“Two hundred scientists walked into a bar…”
I know this sounds like the beginning of a bad joke, but in this case, it was actually the beginning of the 2017 Colorado Protein Stability Conference, held in July at Beaver Run Resort in Breckenridge, Colorado. In attendance was a range of scientists and researchers from all over the world, with one common subject in mind: Protein Stability.
One of the highlights of this conference was the pre-conference workshop, sponsored by Malvern Panalytical and free to all conference attendees. The workshop included a range of presentations from industry specialists, focused on critical methods of assessing particle content in protein-based formulations, as well as key methods for assessing protein stability. Technologies discussed included light scattering, calorimetry, image analysis, nanoparticle tracking analysis, and more.
As part of the workshop, several breakout sessions were held covering all of the primary technologies offered by Malvern Panalytical for assessing protein formulations. These breakout sessions gave scientists, both existing and new users, an opportunity for open discussion with Malvern Panalytical technology specialists and key scientific opinion leaders, to further understand the techniques, best practices, and the critical data analysis required to obtain the best quality data possible. A summary of the key discussion points for each breakout session is contained below:
Dynamic Light Scattering
- What is the optimal protein concentration range for DLS, and how can I determine if my sample is in that range?
The protein calculator contained in the Zetasizer software was demonstrated – this provides a rapid mechanism to calculate the ideal concentration range to be used with these systems.
- How would you compare DLS to alternative nanoparticle counting techniques such as NanoSight?
Above 1 nm, particle counting techniques have the advantage of giving the true number of particles in solution. DLS gives an intensity-/volume-/mass-weighted distribution, and has a large working concentration range of 0.1 – 100 mg/mL. Both techniques together can give powerful and thorough characterization information.
- When looking at DLS data, and trying to assess data quality, how do you know if the result is a good fit or a bad fit?
- Polydispersity is a good place to start: the higher the reported value, the more challenging the fit will become. A polydispersity index (PDI) >0.1 means that you have more than one species in your sample.
- It’s important to understand that the calculated value from DLS is the diffusion coefficient (not the size). This value is then transformed into an equivalent spherical size that corresponds to the same diffusion coefficient.
- Consider the effect of viscosity on size. The diffusion coefficient is combined with the viscosity using the Stokes-Einstein equation to calculate the size (a short sketch of this calculation follows this list).
- Non-ideal effects such as protein-protein interactions will impact the apparent size. You should measure size as a function of concentration to understand concentration dependence.
- The Z-average diameter only fits to 90% of the correlation function, so it won’t exactly match the visual distribution of intensity vs. size. However, it does a better job as a QC method, since it is more repeatable, and is a much simpler fitting function.
- Sometimes it can be useful to filter the sample using a 0.1 μm or 0.02 μm syringe filter, or in some samples, collect the main peak from a desalting/SEC column.
- Glycerol and other low MW co-solutes will resolve as a small diameter species at high concentrations, and can impact the Z-average diameter and the PDI.
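To make the Stokes-Einstein step above concrete, here is a minimal Python sketch that converts a measured diffusion coefficient into the equivalent spherical (hydrodynamic) diameter. The numerical inputs (water viscosity at 25 °C and an illustrative diffusion coefficient typical of a monoclonal antibody) are assumptions for the example, not values from the session.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_diameter(diff_coeff_m2_s, temperature_k=298.15, viscosity_pa_s=8.9e-4):
    """Equivalent spherical diameter from the Stokes-Einstein equation:
    d_H = k_B * T / (3 * pi * eta * D)
    """
    return K_B * temperature_k / (3 * math.pi * viscosity_pa_s * diff_coeff_m2_s)

# Illustrative value: D ~ 4.4e-11 m^2/s (typical of an IgG in water at 25 °C)
d_h = hydrodynamic_diameter(4.4e-11)
print(f"Hydrodynamic diameter: {d_h * 1e9:.1f} nm")  # ~11 nm
```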
- Can you use DLS to detect the binding of a 30 kDa ligand to an antibody?
Considering that the average molecular weight of an antibody is approximately 150 kDa, the binding event would result in a total MW of 180 kDa. Using the protein calculator in the Zetasizer software, you can estimate the reported size of these two MW species (the antibody alone and the antibody combined with the ligand). As ‘spherical proteins’, they would report sizes of around 5.07 nm and 5.48 nm respectively. Although these two species would not be resolvable from one another, the change in MW should result in a large enough change in diffusion coefficient that the Z-average diameter of the sample, or the mean of the peak from the distribution analysis, should change detectably.
For samples undergoing isothermal heat treatment for stability assessment, a plot of count rate versus time can be used to assess changes in the sample. The absolute count rate is very sensitive to the average particle size in a sample, and so should very easily show any significant changes.
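As a rough illustration of the estimate described above, the sketch below uses an assumed empirical power law relating molecular weight to hydrodynamic diameter for compact, globular (‘spherical’) proteins. The coefficients are back-calculated purely to reproduce the 5.07 nm and 5.48 nm figures quoted, so they are assumptions for illustration only; the Zetasizer protein calculator should be used for real estimates.

```python
# Assumed empirical power law d_H (nm) ~= A * M^B, with M in kDa.
# A and B are fitted only to reproduce the diameters quoted in the text.
A = 0.597  # assumed prefactor (nm)
B = 0.427  # assumed exponent

def estimated_diameter_nm(mw_kda: float) -> float:
    """Rough hydrodynamic diameter estimate for a compact globular protein."""
    return A * mw_kda ** B

for label, mw in [("antibody (150 kDa)", 150.0),
                  ("antibody + 30 kDa ligand (180 kDa)", 180.0)]:
    print(f"{label}: ~{estimated_diameter_nm(mw):.2f} nm")
```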
- What is the Mark-Houwink parameter and what can it be used for?
This parameter is typically used in polymer science, but is also applicable to protein samples. It relates the molecular weight of a sample to the intrinsic viscosity of the dispersed species through the Mark-Houwink equation, [η] = K·M^a, where [η] is the intrinsic viscosity, M is the molecular weight, and K and a are constants that depend on the solvent-polymer (protein) system.
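As a simple numerical illustration of the Mark-Houwink relation, the snippet below evaluates [η] = K·M^a for placeholder values of K, a, and M; real values of K and a must be determined for each specific solvent-polymer (or solvent-protein) system.

```python
def intrinsic_viscosity(molecular_weight: float, K: float, a: float) -> float:
    """Mark-Houwink relation: [eta] = K * M^a."""
    return K * molecular_weight ** a

# Placeholder constants for illustration only.
K = 0.01     # assumed prefactor (units depend on the chosen system)
a = 0.7      # assumed exponent
M = 150_000  # molecular weight, g/mol

print(f"[eta] = {intrinsic_viscosity(M, K, a):.1f} mL/g")
```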
Nanoparticle Tracking Analysis
This technology was covered in two active sessions with approximately 15 attendees each. Participants in the NTA session were interested in possible regulatory requirements for particle detection and counting in the submicron range, and clarification on what might be expected. There have been a number of publications, including one by the FDA, where particles in the submicron range have proven to be a clear indicator of later adverse effects. The consensus was that NTA remains a very informative and useful technique that warrants continued investigation for assessment of particles in this size range.
Many existing users had questions on instrument operation and on optimizing experiments for broadly polydisperse distributions. A number of issues could be resolved by upgrading to the latest software, available at no charge from Malvern’s website. Sample testing strategies were discussed, including combining or comparing analyses at different dilutions to get a full picture of the reversibility of aggregates and the full size range.
Of most interest to many participants was how NTA compared or correlated with other techniques. Examples of such studies can be found in the scientific literature for various applications, as well as in Malvern’s technical literature, which covers a wide portfolio of complementary techniques.
Automated Imaging with Raman Microscopy
- Within this discussion group there were two major application areas:
- Biotherapeutics, small molecule pharmaceuticals, forensics, and other industrial applications
- Complex mixture separation – small molecule and consumer goods
- A lengthy discussion was held relating to the identification and classification of particles found in a drug product (DP) and stability lots.
- A point of concern was the difficulty of doing drug substance and/or development particle identification in samples with a very high protein concentration.
- Another discussion related to the comparison between flow microscopy and stationary techniques, such as Morphologically-Directed Raman Spectroscopy (MDRS) as used in the Morphologi.
- It was concluded that flow microscopy and even light obscuration are critical techniques, perfectly suited for particle counting and total particle assessment of samples; however, they do not have the capacity to definitively identify the type of particle present. Although shape and morphology can be used as a preliminary tool, they do not provide an absolute answer.
- The use of a secondary probing technique such as Raman Microscopy allows for the collection of component-specific information additional to shape and morphology, which can definitively identify the type of particle, allowing for its source to be further examined in troubleshooting applications.
- An open discussion was held on sample preparation techniques including filtration and liquid suspension methodology, and the key benefits and issues of each method.
- Filtering a sample through a substrate allows the acquisition of pure spectra, following the collection and identification of particles from a larger volume of sample in a shorter collection time. The limitation here is that some particles, such as silicone oil droplets, are able to pass through the filter substrate.
- Dispersed sample preparations such as those collected in the thin-path wet cell are an excellent way of assessing total particle content and provide a better method of collecting all particles present. The limitation of this method is the limited volume of the sample in the cell.
- The market is currently pressing toward automation and high throughput for all of the particle counting and assessment techniques, as well as for particle identification. Where does current technology stand against this requirement?
- The FDA does not require imaging (yet), but acquiring this data provides a complete documentation process for your current particle control strategy.
- Techniques such as Raman spectroscopy are not considered high throughput techniques, due to the sensitivity of the instruments, although recent advances in sensitivity have resulted in significant improvements in acquisition times for these types of samples.
- There is an open need for faster tools in the industry, and sometimes the tools sold by vendors do not take into consideration the needs of the client businesses.
- The final discussion point was a request from a number of audience members for a client-vendor panel discussion on the evolution of these technologies, which we will be trying to accommodate very soon!
Resonant Mass Measurement
- What cleaning reagents do you currently use?
SDS, PCC54, Contrad, 30% nitric acid, NaOH, or nothing (just a rinse with MilliQ water).
- How long do you load detergent for? What is your cleaning cycle?
Answers varied and were nonspecific (15 seconds to 5 minutes); there was no consistency or common cleaning cycle.
- How good a job does your cleanup do – do you test it?
Most do not test, but four experienced users run water blanks. Pass/fail criteria varied: one user required 0 particles in 5 minutes, another 0 particles in 10 minutes.
- Do you have sticky samples that require more cleaning?
Two users reported issues when running samples with high polysorbate content, which required additional cleaning.
- What is the viscosity range of your samples – anything over 10 cP?
No users reported running noticeably viscous samples.
- Do you filter samples or cleaning reagents?
This question sparked a very lively and lengthy discussion. Many users noted that certain filters seemed to shed particles into their samples and suggested that filtering be performed cautiously and only when necessary. Many other users reported clogging problems related to filters. Some brands of Eppendorf tubes were also reported to shed particles and contaminate samples.
- Concentration ranges – where are you measuring? Why?
Customers measure at all concentration ranges; no particular concentration range preference emerged. The concentrations measured were sample- and experiment-specific.
- Limit of Detection (LOD) – are you using the automatic or User-Entered option? If User-Entered, how do you set it?
No customers reported setting the LOD via the User-Entered option; all allow Archimedes to set the LOD automatically.
- Endpoints – time, volume, # of particles?
There was no common preference for determining the experimental endpoint; users appreciate the flexibility and options offered by the software, and the endpoint used was sample-dependent and somewhat dictated by the nature of the experiment. For users setting # of particles as their endpoint, the question of how many particles are required to generate confidence in the result sparked a lively discussion on the minimum number needed for statistical accuracy (see the short counting-statistics sketch at the end of this section). The number of particles collected ranged from 200 to 1,000, and few users were collecting more than 1,000. Users agreed that experimental run time was important, and the trade-off between low concentrations and long run times was of primary importance.
- Do you have a preference for data analysis and presentation: particle size distribution, specific range, concentration/counts/%counts?
Users appreciate the flexibility of the software; this is of primary importance. Simplifying the software was not of interest to participants. While the size range 1-5 microns was frequently mentioned as a focus size range for many, data was always reviewed as a subset of the total size distribution and all measured sizes in a sample were important and reported. Regulatory requirements varied per product, and collection of data for regulatory purposes was just one form of data collection and instrument use.
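As context for the ‘how many particles’ discussion above, a common rule of thumb (an assumption here, not a conclusion from the session) is that simple Poisson counting statistics give a relative uncertainty of roughly 1/√N on a particle concentration estimate, as the short sketch below illustrates.

```python
import math

# Approximate relative counting uncertainty from Poisson statistics: ~1/sqrt(N)
for n_particles in (200, 500, 1000):
    rel_uncertainty = 1.0 / math.sqrt(n_particles)
    print(f"{n_particles:>5d} particles counted -> ~{rel_uncertainty:.1%} relative uncertainty")
```

On this basis, 200 particles corresponds to roughly 7% counting uncertainty and 1,000 particles to roughly 3%, which is consistent with the range users reported targeting.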
Differential Scanning Calorimetry
The two breakout sessions were both full, with about 15 attendees in each. The discussion was driven by questions from the audience and centered around:
- New functionalities of PEAQ-DSC such as similarity analysis, 21 CFR Part 11 compliance, quality assurance through multiple scans and customized reporting. This information and more details on the new features of PEAQ-DSC can be found in the launch webinar by Dr. Ronan O’Brien.
- Different stability metrics (T1/2, Tonset, and the ratio of calorimetric to van’t Hoff enthalpy) accessible from a DSC thermogram, and their interpretation and significance, including case study examples from the process development and (pre)formulation stages. Many of the examples discussed during the breakout session were taken from two previous webinars. (A simple sketch of reading such metrics from a thermogram follows this list.)
- Orthogonal use of thermal stability metrics and data on protein aggregation propensity. Here we discussed the synergies between DSC and other biophysical techniques such as DLS and multi-detector SEC.
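As a rough, purely illustrative sketch (not the PEAQ-DSC analysis itself) of how a melting temperature and an onset temperature can be read from a baseline-subtracted thermogram trace, the snippet below takes Tm at the maximum of the excess heat capacity and defines Tonset with an arbitrary 5%-of-peak-height threshold, using synthetic data.

```python
import numpy as np

def thermogram_metrics(temp_c, cp, onset_fraction=0.05):
    """Illustrative extraction of Tm and an onset temperature from a
    baseline-subtracted DSC thermogram. Tm is taken at the Cp maximum;
    Tonset is the first temperature where Cp exceeds onset_fraction of
    the peak height (a simplified, assumed definition)."""
    peak_idx = int(np.argmax(cp))
    tm = temp_c[peak_idx]
    onset_idx = int(np.argmax(cp > onset_fraction * cp[peak_idx]))
    return tm, temp_c[onset_idx]

# Synthetic example: a single Gaussian-shaped transition centered near 70 °C
temps = np.linspace(25, 100, 751)
cp_trace = np.exp(-((temps - 70.0) / 3.0) ** 2)
tm, tonset = thermogram_metrics(temps, cp_trace)
print(f"Tm ~ {tm:.1f} °C, Tonset ~ {tonset:.1f} °C")
```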
Taylor Dispersion Analysis
The predominant discussion in the Taylor Dispersion Analysis breakout group was: what is Taylor Dispersion Analysis, and how can it help me understand the stability of my protein formulation? Since this is such a new and emerging technique, the discussion focused on the basic principles of the technique, as well as the key benefits to protein formulation groups. In summary:
- Taylor Dispersion Analysis is an ultra-low sample volume technique (40 nL) which can assess the hydrodynamic diameter of a protein sample and report a volume-weighted size. This size is most useful when compared with a DLS size value, since DLS reports an intensity-weighted size. The comparison of these two average values allows you to understand your primary particle size (TDA) and the presence of any aggregates (DLS) in a sample.
- Advanced application of Taylor Dispersion Analysis allows for the measurement of the Diffusion Interaction Parameter (kD) using a single 4 μL sample plug. In comparison with DLS, this removes the need for serial dilutions, which are a time-consuming manual process and consume a large volume of sample (a short sketch of the dilution-series approach that this replaces follows this list). The automated Viscosizer TD system with 96-well plate functionality means that a large number of samples can be assessed in a much shorter space of time than with DLS.
- Additional measurement capabilities of the system include the measurement of the viscosity of a sample using just 6 μL. The combination of size, viscosity, and kD in a single instrument makes the Viscosizer TD perfectly suited to the assessment of formulation stability.
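For readers less familiar with kD, the sketch below shows the dilution-series route (as used with DLS) that the single-plug TDA measurement replaces: the mutual diffusion coefficient is commonly modeled as D = D0·(1 + kD·c), so a linear fit of measured D against concentration yields kD. The data values below are synthetic and purely illustrative.

```python
import numpy as np

# Synthetic dilution series: concentrations (mg/mL) and diffusion
# coefficients (m^2/s) generated from assumed D0 and kD values.
conc = np.array([1.0, 2.5, 5.0, 7.5, 10.0])   # mg/mL
d0_true, kd_true = 4.4e-11, 0.015              # m^2/s, mL/mg
d_measured = d0_true * (1.0 + kd_true * conc)

# Linear model: D = D0 + (D0 * kD) * c
slope, intercept = np.polyfit(conc, d_measured, 1)
d0_fit = intercept
kd_fit = slope / d0_fit
print(f"D0 ~ {d0_fit:.3e} m^2/s, kD ~ {kd_fit:.3f} mL/mg")
```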