Workshops

Workshop 1

Practical aspects of HPLC separations in omics approaches
Roland Reischl, University of Salzburg

As diverse as omics approaches are in terms of their molecules of interest, they share common challenges. The general aim, and highest priority, in omics is the ideally complete coverage of all analytes present in a given sample.

This in turn calls for analytical methods capable of delivering global selectivity and a wide dynamic range of detection sensitivity. High performance liquid chromatography (HPLC) hyphenated with mass spectrometric (MS) detection is the most commonly applied methodological platform for approaching these goals. While the availability of modern MS technologies has boosted proteomic and metabolomic feature numbers, the instruments themselves generally offer only limited possibilities for fine-tuning selectivity and detection sensitivity; these are largely determined by the mass spectrometer's figures of merit. When selectivity and sensitivity are to be improved, it is usually done in the analytical workflow upstream of MS detection.

In this workshop we will therefore focus on HPLC methodologies compatible with MS detection. Not only is there a plethora of stationary-phase chemistries offering selectivity for almost any known molecular property; the dimensions of chromatographic columns, from semi-preparative scale down to microfluidic devices, also demand well-reasoned choices. We will try to shed light on the capabilities and limitations of modern HPLC and to find compromises between ease of use, sample throughput, loading capacity and detection sensitivity, all of which are strongly influenced by the column diameter in particular. All of this will be discussed in the context of the inherently different molecular characteristics encountered when analysing metabolites, peptides or even whole proteins.
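One reason the column diameter weighs so heavily in this trade-off is the simple geometric scaling of flow rate and chromatographic dilution with the square of the inner diameter. The short Python sketch below illustrates this rule of thumb for a hypothetical move from a 2.1 mm analytical column to a 75 µm nano column; the starting flow rate, the assumption of constant linear velocity and identical injected amount, and concentration-sensitive detection are illustrative assumptions, not values from the workshop.

# Rule-of-thumb scaling when downscaling the column inner diameter (i.d.),
# assuming constant linear velocity and the same injected amount on-column.
# Illustrative numbers only; real gains depend on the ion source,
# extra-column volumes and the loading capacity of the narrower column.

def area_ratio(d_large_mm: float, d_small_mm: float) -> float:
    """Ratio of column cross-sectional areas, (d_large / d_small) ** 2."""
    return (d_large_mm / d_small_mm) ** 2

d_analytical = 2.1       # mm i.d., standard analytical column (assumed)
d_nano = 0.075           # mm i.d. (75 um), typical nano-LC column (assumed)
flow_analytical = 300.0  # uL/min on the 2.1 mm column (assumed)

f = area_ratio(d_analytical, d_nano)
print(f"Cross-sectional area ratio:     {f:.0f}x")
print(f"Scaled nano flow rate:          {flow_analytical / f * 1000:.0f} nL/min")
print(f"Theoretical dilution advantage: ~{f:.0f}-fold higher peak concentration")

For the same injected amount, the peak eluting from the narrower column is roughly 780 times more concentrated in this example, which is commonly cited as the main driver of the sensitivity gain of miniaturised columns with concentration-sensitive detection such as ESI.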


Workshop 2

Statistics Explained – From Principal Component Analysis to Multiple Testing to Nonparametrics
Patrick Langthaler and Arne Bathke, University of Salzburg

Statistical data analysis is a firmly established part of research in many scientific fields. Since statistics is a discipline in its own right, it is often difficult for practitioners from other fields to confidently and correctly use statistical methods to help answer their research questions.

However, no extensive study of mathematics is required to gain a rudimentary understanding of many statistical techniques and their common pitfalls. In this workshop we aim to explain one common statistical procedure, principal component analysis, from the ground up, requiring only high-school level mathematics as a prerequisite. We then explain the problem of multiple testing and discuss some commonly used solutions. The workshop concludes with a discussion of modern nonparametric techniques.

Principal Component Analysis (PCA) is a method commonly used for dimension reduction. When hundreds of variables are observed, it can be difficult to get the big picture of what is happening in the dataset. Often, however, many of the variables are highly correlated with each other. This allows for the identification of new variables, the so-called principal components. Often most of the information contained in the dataset can be summarised using only a small number of principal components, making it much easier to gain a basic understanding of the dataset.

Hypothesis testing is probably the best-known application of statistics in scientific research. It is rare for a modern paper to contain only a single hypothesis test. We will discuss how the naive use of multiple hypothesis tests can lead to problems, and we will provide different solutions for these issues.

Many statistical methods are parametric. This means that they rely on the assumption that the data follow a very specific type of distribution (for example a normal distribution) in order to give accurate and reliable results. Whether these assumptions hold can be difficult to argue for a given dataset. Nonparametric methods use much more general statistical models. This leads to a broader range of scenarios in which they can be used, and gives the user more confidence that the requirements are actually fulfilled. We will discuss some recent advancements and research in nonparametric statistics.
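For participants who would like to experiment ahead of the workshop, the short Python sketch below runs through the three topics in the same order: PCA on simulated correlated data, a Holm correction for a batch of hypothesis tests, and a nonparametric two-sample test. The simulated data and the choice of libraries (numpy, scikit-learn, scipy, statsmodels) are illustrative assumptions and are not taken from the workshop or its handout.

# Minimal sketch of the three workshop topics using standard Python libraries.
# Purely illustrative simulated data; not material from the workshop handout.
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import ttest_ind, mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)

# 1) PCA: 100 samples x 50 variables that are all driven by 2 latent factors,
#    so most of the variance should be captured by the first two components.
latent = rng.normal(size=(100, 2))
loadings = rng.normal(size=(2, 50))
X = latent @ loadings + 0.1 * rng.normal(size=(100, 50))
pca = PCA(n_components=5).fit(X)
print("variance explained per component:", pca.explained_variance_ratio_.round(3))

# 2) Multiple testing: 200 two-sample t-tests where no true difference exists.
#    Without correction, roughly 5% of tests come out "significant" by chance.
pvals = np.array([ttest_ind(rng.normal(size=20), rng.normal(size=20)).pvalue
                  for _ in range(200)])
print("uncorrected rejections:", (pvals < 0.05).sum())
reject, _, _, _ = multipletests(pvals, alpha=0.05, method="holm")
print("Holm-corrected rejections:", reject.sum())

# 3) Nonparametric alternative: the Mann-Whitney U test does not assume normality.
a = rng.lognormal(size=30)             # clearly non-normal data
b = rng.lognormal(mean=0.5, size=30)
print("Mann-Whitney p-value:", mannwhitneyu(a, b).pvalue)

With no true group differences, the uncorrected count of "significant" t-tests is typically around 10 out of 200, while the Holm-corrected count is almost always zero; the Holm procedure is one of the commonly used solutions referred to above.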


The handout for this workshop is available here.


Workshop 3

Industrial aspects of biopharma characterization
Urs Lohrig, Novartis

In this workshop, the participants will gain a better understanding of how analytical characterization in biopharma is carried out, of the critical milestones and components of analytical characterization exercises, and of the requirements for the analytical method portfolio.

Within the pharmaceutical industry, a major shift towards biopharmaceutical products has been observed over the last two decades. Initially, the focus was mainly on substituting smaller, functional proteins in humans (e.g. insulins or G-CSF), in no small part because of technical limitations. Nowadays, an array of biomolecules is in use or under development, ranging from small insulins to massive protein conglomerates in the megadalton range. These compounds can tackle diseases that have so far not been successfully targeted by small-molecule drugs. Antibody-based drugs are certainly the main class of molecules under development, with their own opportunities and challenges.
But how does industry develop these products? By pure chance, or purpose-driven? And once suitable expression systems are established, how are these products characterized? What do regulatory agencies require for approval of the products? And how does industry ensure and control a consistent quality of the products?
This workshop aims to provide a first insight into how these processes may be performed, exemplified by the development of biosimilars. Biosimilars are follow-on products whose reference product is no longer under patent protection and is thus open to competition from other companies. So how do we replicate a reference medicine? How do we start the development? What are the critical steps in the process? What is needed from an analytical viewpoint? And how might this analytical viewpoint differ between industry and academia?