Quantitative Pathology & Bioimage Analysis


Latest stable release (0.2.3)

Towards QuPath v0.2.0

by Pete Bankhead

This page describes the first milestone of QuPath v0.2.0; you can download the latest milestone here.

It is mostly written for people who already know the software. If it is entirely new to you, you might want to check out this first.

What has been happening since v0.1.2?

QuPath v0.1.2 was released in December 2016, at the end of my time at Queen’s University Belfast.

They were simpler times. The software had only just become publicly available and didn’t have many users yet.

Now in March 2019, QuPath v0.1.2 has had more than 28,000 downloads and is used by groups all over the world. It has been applied in almost 70 publications so far.

After a short time outside academia, I took up a PI position at the University of Edinburgh in September 2018 and restarted work on QuPath.

This is now the first milestone release from Scotland: QuPath v0.2.0-m1.

Why is this called a ‘milestone’?

In short, because it’s not quite finished.

Milestone versions exist to make new features available now to the curious or adventurous, but should be used with caution. There are more changes, bug fixes and improvements planned before it is finalized.

I’m pretty excited about some of the new things and I hope you’ll like them.

Why not alpha or beta?

Because there are more numbers than Greek letters…

The aim is to update (quite) rapidly to fix issues as they emerge. The ‘milestone’ status is indicated by the -m1 tagged on to the end of the version number. The next update should be -m2, then -m3… and so on until the software is deemed stable enough to ditch the ‘milestone’ status entirely.

Then work begins on the next version…

What’s new in v0.2.0?

Firstly, everything I blogged about here should be included. But there is much more…

Pixel classifier

Probably the biggest new feature is the Pixel classifier.

If you just want to quantify stained areas, this is for you. But it opens up a lot more possibilities.

It works quite like the existing object classifier… but doesn’t need objects. Here it is in action:

You can interactively adjust the resolution, features, classifier type and parameters, as well as visualize the features and classification results as ImageJ stacks.

The pixel classifier can also automatically provide area measurements within annotated regions, or you can use it to generate new annotations (either across the whole image or within a pre-annotated region). If you like, you can also feed the pixel classification through to overlapping detections.

To help make this work smoothly, there are some new special classifications:

  • Region for defining an area of interest, but which shouldn’t influence the classifier.
  • Ignore for defining areas that shouldn’t be used to calculate area proportion results or generate annotations (e.g. background/whitespace).

New classifications

Also, if an annotation is locked then it is not used to train the pixel classifier. This means you can finally have classified annotations that don’t thwart your future attempts at training a new classifier.

Channel viewer

It can be very hard to see exactly what is in your image when you have multiple color channels.

An ostensibly simple task like ‘manually count the double positive cells’ can become very awkward when brightness/contrast settings trick the eye.

v0.1.2 helped a bit by letting you toggle channels on or off quickly just by pressing the corresponding number key.

Now, there is also a View → Mini viewers → Show channel viewer command that lets you see all channels at once.

Annotation improvements

The annotation tools in v0.2.0 are getting better.

First, by popular demand (from @Tkilvaer here) there is a Polyline tool. Until I get around to making a better icon, it is a ‘V’ on the toolbar - so that is also the shortcut.

There are also more options to control the Wand sensitivity and smoothing in the preferences.

The Point tool behaves better than previously, and in the Preferences the new Use multipoint tool option lets you specify whether it should create new points on every click or add to existing point collections.

But my favorite new annotation feature is the Ctrl + Shift or Cmd + Shift trick. This can be used to automatically clip an annotation so that it doesn’t overlap with an existing annotation, or doesn’t extend beyond an existing parent annotation.

An increasingly common use for QuPath is to annotate images to train AI algorithms.

Extensions → AI → Export training regions is a work in progress to help export annotations as deep learning-friendly labelled images. But for full control, you’re still likely better off scripting.

Classifying detections

An occasional question on the forums has been whether it’s possible to manually fix misclassified detections.

As with so many questions, the answer was always ‘not really… just with a script’. The reasoning was that creating a classifier is more reproducible, and setting classifications manually should be discouraged.

A subsequent shift in values and perspective has led me to believe QuPath ought to leave that decision up to you. So now, Set class works for detections too.

Selection mode

Taking the above to another level is Selection mode - the S on the toolbar. This turns the normal drawing tools into selection tools, and combined with ‘Auto set’ can further convert them into classification tools.

Bio-Formats by default

Previously, if you wanted Bio-Formats support you needed to install it as a separate extension.

Now, Bio-Formats comes with QuPath - no separate installation necessary.

This not only gives you immediate access to the many file formats Bio-Formats supports, but also the ability to write OME-TIFF images - including pyramidal images.

This option exists under File → Export region.

Huge thanks to the OME team for the fantastic work they do in making the open source readers and writers needed to make open bioimage analysis possible!

OMERO support

Managing large collections of images is extremely difficult - not least if they are whole slide images. OMERO is another impressive output from the OME team, offering an open source solution to that problem.

QuPath now has preliminary support for reading images directly from OMERO - acting just like a web viewer, without needing to download the whole slide. But, unlike most web viewers, you get all the annotation and analysis tools of QuPath.

Here’s a demo showing an image being opened from IDR, before applying the new polyline & pixel classification tools:

OMERO integration is at quite an early stage: it’s not possible to send images back to OMERO or handle non-RGB images. There may also be some trouble when logins are required. But it may be useful even now, and it shows how QuPath can connect to read remote images.

A new user forum

The recommended place to ask questions is now forum.image.sc.

This is a bigger, brighter user forum for multiple bioimage analysis software applications, but you can still find posts tagged ‘qupath’ specifically if that’s all you are interested in.

This means the QuPath Google Group is retiring; Help → View user forum now points to image.sc as well.

Cell detection

Cell detection is one of the most commonly-used commands in QuPath.

It has been kept mostly the same, but a few small changes were necessary and it is important to be aware that the results generated may not be exactly the same as in previous versions (but they should be similar). Specifically, the changes are:

For these reasons, you shouldn’t mix cell detections (and cell classifiers) from old versions of QuPath with the detections here. They look (and behave) very similarly - but they are not quite identical.

Projects

Projects have been thoroughly revised.

It used to be that inside the project was a ‘data’ subdirectory containing a lot of .qpdata files - each file with the same name as an image.

Now, there is still a ‘data’ subdirectory, but it is full of bizarrely-named sub-sub-directories - one per image.

Why?

Firstly, it avoids the requirement that every image in a project must have a unique name. Secondly, it makes it possible to store more useful information about images than whatever can be squeezed into a .qpdata file… which will enable other improvements in the future.

Two things to note for now:

Here’s a screenshot showing both of these in action:

New classifications

A side effect is that it’s now also a bit easier to read ImageData within a script, e.g.

def project = getProject()
for (entry in project.getImageList()) {
    def imageData = entry.readImageData()
    print imageData.getHierarchy().getAnnotationObjects()
    imageData.getServer().close() // best to do this...
}
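If you want to modify the data and write it back, ProjectImageEntry also has a corresponding save method - a minimal sketch (assuming the script runs inside QuPath’s script editor):

```groovy
def project = getProject()
for (entry in project.getImageList()) {
    def imageData = entry.readImageData()
    // ...make changes to the object hierarchy here...
    entry.saveImageData(imageData) // write the changes back to the project
    imageData.getServer().close()
}
```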

Somewhat technical (but important!) things

Speed & stability

Many of v0.1.2’s performance woes could be traced back to that little ‘Hierarchy’ tab on the left of the screen, which tries to give you a ‘tree view’ of the hierarchy.

If you had a lot of objects (e.g. many cells) and QuPath would suddenly hang, there is a strong chance that tree view was responsible - especially if you had just been selecting objects.

A few steps have been taken to try to avoid this being troublesome:

If this makes little sense, feel free to ignore it. Synopsis: QuPath should behave better and faster.

Java 11

QuPath now uses Java 11 rather than Java 8.

This is a big change, which means that QuPath can more easily keep up to date with the latest Java improvements.

ImageServers

QuPath reads pixels and metadata from something called an ImageServer.

This needed to be very thoroughly revised for several reasons:

The changes address many of these problems.

OpenCV and JavaCPP

QuPath previously used (or tried to use) the original OpenCV Java bindings, but these were hard to maintain across different platforms and some things just didn’t quite work.

Now, QuPath uses JavaCPP, and has been updated to OpenCV 4.0.1 (from 3.1.0). This makes OpenCV much easier to use from commands and scripts.

Some of the new features, like the pixel classifier, depend on this.

Java Topology Suite

A new dependency in QuPath is Java Topology Suite (JTS).

This makes it much easier to do fancy things with shapes, and made it possible to incorporate many improvements to the reliability and performance of the object hierarchy.

Scripters might like to know you can also easily change between any QuPath ROI and a JTS Geometry:

def pathObject = getSelectedObject()
def roi = pathObject.getROI()
def geometry = roi.getGeometry()

Similarly, you can convert most ROIs (i.e. not point ROIs) into java.awt.Shape objects as well:

def shape = roi.getShape()
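Once you have a java.awt.Shape, the standard Java geometry methods become available - for example (a small sketch, assuming roi is an area ROI as in the snippet above):

```groovy
def shape = roi.getShape()
print shape.contains(100, 100) // is this pixel inside the ROI?
print shape.getBounds2D()      // the ROI's bounding box
```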

Object hierarchy & measurements

A somewhat involved technical note… not everyone needs to know this.

QuPath stores objects in a hierarchy, which is documented in detail here.

This has proved pretty successful, but it has some weird idiosyncrasies that can be confusing - especially when annotations overlap one another. The rules were:

Generally, that works fine because annotations don’t normally overlap - or, if they do, not usually in terribly important ways.

However, it is possible to have detections located inside overlapping annotations; in that case, you can’t tell just by looking at the image what is a descendant of what. In that case the measurements can be surprising. For example, it’s possible to see this in v0.1.2:

New hierarchy measurements

The hierarchy remains (at least for now), but its behavior has changed somewhat:

Basically, if it looks like a cell is inside the region then it should be counted - no matter what else is going on with the hierarchy. So the same arrangement of objects as above gives these measurements:

New hierarchy measurements

I think that ultimately this is more intuitive, but it is an important change. Please explore and give feedback if you a) prefer it, b) would rather have the old way, or c) find any bugs.

There can still be surprises using the ‘covers’ rule, for example when drawing inside a ‘hole’ that is within a larger annotation. In general, it is best to try things out and check what it does… and also report any bugs.

Note for scripters: If you change the object hierarchy, you must remember to fire a hierarchy update before making measurements… otherwise it won’t know to update its spatial map of where everything is, which is essential to get the measurements correct.

You should fire update events anyway, but now it is even more important.
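As a sketch, a script that adds a new annotation before measuring might look like this (using the fireHierarchyUpdate() scripting shortcut; the rectangle coordinates here are arbitrary):

```groovy
import qupath.lib.regions.ImagePlane
import qupath.lib.roi.ROIs
import qupath.lib.objects.PathObjects

// Create and add a new annotation
def roi = ROIs.createRectangleROI(100, 100, 500, 500, ImagePlane.getDefaultPlane())
addObject(PathObjects.createAnnotationObject(roi))

// Fire the update so the hierarchy rebuilds its spatial map
// before any measurements are made
fireHierarchyUpdate()
```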

Constructing objects and ROIs

If you need to create ROIs or PathObjects in scripts, you can (and should) do it now with the helper classes ROIs, PathObjects and ImagePlane:

import qupath.lib.regions.ImagePlane
import qupath.lib.roi.ROIs
import qupath.lib.objects.PathObjects

int z = 0
int t = 0
def plane = ImagePlane.getPlane(z, t)
def roi = ROIs.createRectangleROI(0, 0, 10, 10, plane)
def pathObject = PathObjects.createDetectionObject(roi)

print pathObject

A user directory, not an extensions directory

Previously, if you dragged a .jar file onto QuPath it could be installed in a specific extensions directory.

This was necessary to add Bio-Formats, for example.

You don’t need to install Bio-Formats separately any more, but the extensions directory lives on… kind of. It is now actually a sub-directory inside a general ‘user directory’.

Why does that matter?

Because the user directory is more general, it can also store other things. For example, in the Preferences there is a Create log files option, and those logs go in the user directory.

Other uses for the user directory are likely to emerge in the future. For example, it would be nice to be able to store things like default lists of classifications or stain vectors. That isn’t possible yet… but it could be one day.

Old extensions may not be compatible!

In particular, don’t install the old Bio-Formats extension - it’s no longer needed, and can only cause trouble.

What’s still to come?

There are many more things that need to be done, some more visible than others:

These may not all make it into v0.2.0.

But, if not, v0.3.0 should not be so far away…

Will v0.1.2 and v0.2.0 be compatible?

In some ways, but not all.

You should be able to read .qpdata files from v0.1.2 in v0.2.0-m1.

But you can’t read v0.2.0-m1 projects in earlier versions.

You should beware of making any changes to old projects with v0.2.0-m1 as this could make them incompatible with v0.1.2.

And because of the cell detection changes, you may not get identical results when using different versions.

As a general rule, for any project it is best to stick to the same version of the software and carefully check the results yourself.

And do keep in mind that milestone releases are works-in-progress - so don’t rely on things remaining unchanged. On the other hand, you can influence the direction of any changes by giving feedback!