ZEN 3.11 - Software Manual
Original Manual
© 2024 Without the prior written consent of ZEISS, this document or any part of it must be neither translated nor reproduced or transmitted in any form or by any means, including electronic or mechanical methods, photocopying, recording, or any information storage or filing system. The right to make backup copies for archiving purposes remains unaffected. Any violations may be prosecuted as copyright infringements.
The use of general descriptive names, registered names, trademarks, etc. in this document does not imply that such names are exempt from the relevant intellectual property laws and regulations and are therefore free for general use. This also applies if this is not specifically referred to. Software programs remain entirely the property of ZEISS. No program or subsequent upgrade thereof may be disclosed to any third party, copied, or reproduced in any other form without the prior written consent of ZEISS, even if these copies or reproductions are destined for internal use at the customer's site only, the only exception being one single backup copy for archiving purposes.
Content
1 General Information
2 First Steps
3 Basic Concepts
9 Reference
10.1 ApoTome.2
    10.1.1 Introduction
    10.1.2 Principle of Imaging Using Fringe Projection
    10.1.3 Optimum Acquisition Conditions
    10.1.4 List of Recommended Objectives
    10.1.5 Preparation: Phase Calibration
    10.1.6 Step 1: Define Channels Using Smart Setup
    10.1.7 Step 2: Grid Focus Calibration
    10.1.8 Step 3: Perform ApoTome Experiment
    10.1.9 Step 4: Process the Resulting Image
    10.1.10 Step 5: Perform Z-Stack Acquisition
    10.1.11 Step 6: Perform ApoTome Deconvolution
    10.1.12 Reference
10.2 Axio Observer
    10.2.1 Auto Immersion
10.3 Axioscan
    10.3.1 Introduction
    10.3.2 Working with ZEN slidescan
    10.3.3 Ensuring Correct Focus
    10.3.4 Reference
10.4 Celldiscoverer 7
    10.4.1 Introduction
    10.4.2 Performing a Celldiscoverer Calibration
    10.4.3 Using Customized Sample Carriers
    10.4.4 Shading References on Celldiscoverer
11 Maintenance
12 FAQ
Glossary
Index
1 General Information
1.1 Welcome
See also
§ Basic Concepts [→ 38]
§ First Steps [→ 22]
Explanation: Software controls and GUI elements.
Example: Click Tools in the menu bar.

Explanation: Press several keys on the keyboard simultaneously.
Example: Press Ctrl + Alt + Del.

Explanation: Follow a path in the software.
Example: Select Tools > Options > Language.

Explanation: Link to further information within this document.
Example: See: Text Conventions and Link Types [→ 16].
CAUTION and NOTICE are standard signal words used to indicate the levels of hazards and risks of personal injury and property damage. Read all safety messages in the respective chapters carefully. Failure to comply with these instructions and warnings can result in both personal injury and property damage and may involve the loss of any claims for damages.
The following warning messages, indicating dangerous situations and hazards, are used in this document.
CAUTION
Type and source of danger
CAUTION indicates a potentially hazardous situation which, if not avoided, may result in minor
or moderate injury.
NOTICE
Type and source of danger
NOTICE indicates a potentially harmful situation which, if not avoided, may result in property damage. NOTICE also warns of possible data loss or corrupted data.
Info
Provides additional information or explanations to help the user better understand the contents of this document.
[Figure: the online help window; the numbered elements are described below]
1 Index
List of keywords to help you find topics and content quickly.
2 Topics
Contains the structure tree with a list of all the topics.
3 Search
Search through the entire text.
It supports partial strings but not wildcards.
4 Structure tree
Enables you to navigate through topics sequentially. A > indicates a topic has subtopics.
5 Content panel
6 Print
Enables you to print the currently displayed topic.
ZEN is microscope software for microscope control, image acquisition, image processing, and image analysis. The field of application of the software covers general tasks and applications in microscopy and image acquisition in routine and research, among others in the life sciences.
The software is not intended to directly or indirectly produce medical diagnostic results.
To avoid possible data loss, corrupted data, or incorrect measurements, check and observe the following:
§ Only trained persons are allowed to work with the software, e.g. performing measurements. Each user must have been informed about the possible risks connected with work in the field of microscopy and the relevant area of application.
§ Check that all hardware components are correctly configured in the MTB configuration software.
§ Check the accuracy of a generated scaling (pixel sizes) immediately after starting up the system. When using manual microscopes, always check that the correct scaling is selected before each image acquisition, in order to avoid incorrect measurements. The accuracy of all generated scalings should be checked at regular intervals.
§ Save all your data, such as images, measurement data, archives, reports, forms, and documents, at regular intervals on an external storage medium. Otherwise, this data may be lost as a result of operational errors or hardware defects.
Instruction Manuals
For detailed information on how to use the hardware (e.g. microscope or microscope system), refer to its Instruction Manual or ask your ZEISS Sales & Service Partner.

System Requirements
For more details on system requirements for the software, please refer to the corresponding Installation Guide.

Local and National Health and Safety Regulations
Observe local and national health and safety regulations for the location of installation and during the use of the microscope. Consult your ZEISS Sales & Service Partner if these regulations conflict with the installation requirements of the microscope.

System and Third-Party Components, Accessories
Information about the individual components, enhancements, and accessories can be obtained from your ZEISS Sales & Service Partner. Also refer to the documentation of third-party manufacturers.
ZEISS draws the user's attention to the fact that the information and references contained in this documentation may be subject to technical modifications, in particular due to the continuous further development of ZEISS products. The enclosed documentation does not contain any warranty by ZEISS with regard to the technical processes described in it or to certain reproduced product characteristics. Furthermore, ZEISS shall not be held liable for any printing errors or other inaccuracies in this documentation, unless proof can be furnished that any such errors or inaccuracies are already known to ZEISS, or are not known to ZEISS due to gross negligence, and that ZEISS has for these reasons refrained from eliminating them appropriately. ZEISS hereby explicitly draws the user's attention to the fact that this documentation only contains a general description of the technical processes and information, the implementation of which in any individual case may not be appropriate in the form described here. In cases of doubt, we recommend that the user consult ZEISS service and support.
This documentation is protected by copyright. ZEISS has reserved all rights to this documentation. It is prohibited to make copies, partial copies, or to translate this documentation into any other language, except for personal use.
ZEISS explicitly draws attention to the fact that the information contained in this documentation is updated regularly in line with the technical modifications and supplements made to the products, and that this documentation only reflects the technical status of ZEISS products at the time of printing.
Disclaimer
Note that this software contains an extension that enables you to connect it with the third-party software ImageJ. ImageJ is not a ZEISS product. Therefore, ZEISS undertakes no warranty concerning ImageJ, makes no representation that ImageJ or derivatives such as Fiji or related macros will work on your hardware, and will not be liable for any damages caused by the use of this extension. By using the extension, you agree to this disclaimer.
Third-Party Software/Hardware
If you use software or hardware from other manufacturers (third parties) or perform software changes on the system computer, you must first check that the hardware, software, or software changes are compatible with ZEISS software. Following the installation of software or hardware from other manufacturers, always perform a thorough check to ensure that the performance of all your ZEISS software components is unaffected.

Applicable Standards & Regulations
This software and the corresponding documentation have been designed, created, and tested in compliance with the following regulations and directives:
§ Quality management system certified to DIN EN ISO 13485
§ Documentation and Safety Notes according to DIN EN 82079-1 (VDE 0039-1)
1.9 Contact
If you have any questions or problems, contact your local ZEISS Sales & Service Partner or one
of the following addresses:
Headquarters
Email: [email protected]
ZEISS Portal
The ZEISS Portal (https://portal.zeiss.com/) offers various services that simplify the daily work with
your ZEISS systems (machines and software).
Service Germany
Email: [email protected]
2 First Steps
2.1 Starting Software
Info
Using Pre-Recorded Images
To reload previously used images when starting the software, activate the Reload Last Used Documents checkbox in the menu Tools > Options > Startup.
Prerequisite ✓ Microscope and hardware components are switched on and ready for operation.
✓ ZEN software is installed on your computer.
1. Double-click the program icon on your desktop.
See also
§ Creating a ZEN Connect Project [→ 625]
Info
Reopening
After you have exited this screen, you can reopen it by clicking File > Show Welcome Screen, if necessary.
[Figure: the Welcome Screen; the numbered areas are described below]
1 Overview Area
In this area you can get a general overview of ZEN, see Overview Area [→ 23].
3 Start Buttons
With these buttons you can open existing documents or create new ones, see Start Buttons [→ 24].
What's new
Opens the browser to show you what is new in this version of ZEN.

Learn & Support
Opens the browser to display the site with support and learning material for ZEN.

Do not show this at startup
Activated: The Welcome Screen is no longer displayed on startup. You can also change this setting in Tools > Options > Startup/Shutdown > Welcome Screen.
Open Existing
Opens a file browser to select an already existing file you want to open in ZEN.

Acquire New Image
Only visible if you have started a profile with acquisition. Closes the Welcome Screen to allow the setup of a new acquisition.

Create New Project
Opens the New Document dialog to create a new ZEN Connect project.
See also
§ Creating a ZEN Connect Project [→ 625]
§ New Document Dialog [→ 817]
Help icon
Activates the "drag & drop" help function. A question mark appears beside the mouse pointer. Move the mouse pointer to a place in the software where you need help. Left-click on the desired location. The online help opens.
See also
§ File Menu [→ 814]
§ Edit Menu [→ 818]
§ View Menu [→ 820]
§ Acquisition Menu [→ 820]
§ Graphics Menu [→ 822]
§ Macro Menu [→ 823]
This area contains the main tabs for microscope and camera settings (Locate tab), image acquisition (Acquisition tab), image processing (Processing tab), and image analysis (Analysis tab). The main tabs are organized in an order which follows the typical workflow of experiments in bioscience or material science.
See also
§ Locate Tab [→ 855]
§ Acquisition Tab [→ 859]
§ Processing Tab [→ 873]
§ Analysis Tab [→ 873]
§ Extensions Tab [→ 874]
[Figure: the Center Screen Area; the numbered areas are described below]
2 Image Views
Area where you can switch between different image views by selecting the corresponding tab in the list.
3 View Options
Area for general and specific view options.
4 Image Area
Area where images, reports, and tables are displayed.
See also
§ Image Views [→ 990]
§ General View Options [→ 1029]
This area mainly contains the tools for image and file handling (e.g. Image Gallery) and hardware control (e.g. Stage/Focus tool). Depending on your system configuration, other tools can be available. The tools are described in the corresponding chapters of the online help.
See also
§ ZEN Connect Tool [→ 650]
§ Stage Tool [→ 985]
§ Focus Tool [→ 938]
§ Incubation Tool [→ 970]
§ Microscope Tool [→ 983]
§ Images and Documents Tool [→ 966]
§ Macro Tool [→ 979]
Here you see tabs for all open documents. Click on a tab to view the image/document. At the right end of the document bar, you find buttons to switch the view mode between Exposé and Splitter mode, and further view options (View menu).
Info
An asterisk (*) next to an image/document title indicates that changes have been made to this
document which are not yet saved. Save your pictures/documents from time to time in order
to avoid data loss.
Exposé Mode
Opens the Exposé view mode.

Splitter Mode
Opens the Splitter view mode.
See also
§ Exposé Mode [→ 1051]
§ Splitter Mode [→ 1051]
§ View Menu [→ 820]
Scaling
Displays which lateral scaling is currently being used. If you click on the arrow, the Scaling dialog [→ 831] opens. There you have access to advanced scaling settings and the scaling wizard.

System Information
Always shows the latest, currently active process that the system is performing.

Progress Bar
Displays the progress of the currently active process. Each new process added supersedes older, still active processes. If you click on the arrow button, a window opens with a list of all processes in chronological order. You can stop a running process using the Stop button.

Performance Indicators
In this group you see an overview of the performance of individual computer components:
§ Free RAM indicates how much physical memory is still available.
§ Free HD indicates how much space is still available on the hard drive onto which the next image is to be acquired (see Tools > Options > Saving).
§ CPU indicates the usage of the Central Processing Unit.
§ The small status bar provides an overall assessment of the system usage.
§ GPU indicates the usage of the Graphics Processing Unit by ZEN and ZEN-related services. It is also visualized by a small status bar on its right. Note: This GPU indicator is only visible if your computer has Microsoft Windows 10 version 1709 or higher.

Frame Rate
Indicates the current frame rate in frames per second (fps) used by the active camera for producing new images. Note that at speeds greater than 100 frames per second, this value cannot always be accurately determined.

Pixel Value
Displays the gray value of the image at the current position of the mouse pointer. In the case of multichannel images, the gray value per channel is displayed for up to 4 channels.

Position
Displays the X/Y position (in pixel coordinates) of the mouse pointer in the image.

Information (i)
If you click on the icon, a window opens with System Messages [→ 30].

Storage Folder
Displays the location where new images are automatically saved. This path can be changed in the menu Tools > Options > Saving.

Status: Airyscan Detector Alignment (optional)
If you click on the arrow, the Alignment Tool window opens.
See also
§ Airyscan Detector Adjustment [→ 1294]
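The disk-space check behind the Free HD indicator can be approximated in a few lines. The sketch below is our own illustration, not part of ZEN; the helper names are hypothetical. It uses only the Python standard library to report free space on the drive that will receive the next image, the same quantity the status bar displays:

```python
import shutil

def free_disk_gb(path="."):
    """Free space, in gigabytes, on the drive holding *path*,
    roughly what the Free HD status-bar indicator reports."""
    return shutil.disk_usage(path).free / (1024 ** 3)

def enough_space_for(n_images, image_size_gb, path="."):
    """Rough pre-acquisition check: do *n_images* of
    *image_size_gb* gigabytes each still fit on the target drive?"""
    return free_disk_gb(path) > n_images * image_size_gb
```

Running a check like this against the folder configured in Tools > Options > Saving before a long time-lapse run helps avoid failed acquisitions due to a full disk.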
If you right-click on a system message, a Copy button appears. Left-click the Copy button to copy the message to the clipboard. Then paste it into a text file or an e-mail. This allows you, for example, to easily send error messages to your support team. This copy/paste function also works for all upcoming system messages and error messages within the application.
Information
System information that arises during normal operation. This system information does not lead to an interruption of the workflow. The information window is not displayed automatically.
Info
Hundreds of messages can accumulate in the course of a session. A maximum of 300 messages are displayed. To display messages for a certain category, activate or deactivate the corresponding checkboxes.
Info
If the Select Automatically checkbox is activated, the software uses the language which is
set in the system settings of your computer. This is the default setting.
1. With the Show All mode deactivated (default setting), only the basic functions of tool windows or view options are shown.
2. To show the advanced settings or expert functions of tool windows or view options, click Show All.
This chapter refers to the manual configuration of the microscope components in ZEN lite. All microscope component definitions are stored in the metadata of the acquired image.
→ The tool opens. Make sure that the Show All button is activated.
2. For Objective, select the objective you will use for your acquisitions.
3. Select all other microscope components you will use (e.g. Optovar, Reflector, etc.).
You have successfully configured your microscope components.
Info
If you have activated the Select Automatically button in the status bar under Scaling (default setting), the scaling is calculated on the basis of your definitions. If you want to perform a manual scaling, read the chapter Creating Manual Scaling [→ 36].
This topic guides you through acquiring your first camera image with the software.
Prerequisite ✓ You have connected and configured a microscope camera (e.g. Axiocam 305 color/mono) on your system.
✓ You have started the software.
✓ You have configured the microscope components (e.g. objective, camera adapter) and you are using the automatic or manual scaling.
✓ You are on the Camera (ZEN lite only) or Locate tab.
✓ Your microscope camera is available in the Active Camera section. If not, select the camera from the list.
1. Position your sample on the microscope and adjust the microscope to see a focused image through the eyepieces.
2. Adjust the tube slider of the microscope to divert the image to the camera (e.g. 50% camera and 50% eyepieces).
3. Click Live.
→ The Live mode is activated. In the Center Screen Area you see the camera live image. By default, the live image shows a crosshair to help you navigate on the specimen. In the chapter Adjusting Live Image Settings [→ 35] you will learn how to optimize the live image display.
Info
If you do not see a focused image, refocus the specimen on the microscope. You can activate the focus bar as an additional aid: right-click in the Center Screen Area to open the context menu and select the entry Focus Bar. The focus bar is shown.
See also
§ Document Bar [→ 28]
Annotation is the generic term for all graphics (e.g. rectangles, arrows, scales), measurements, texts, or other metadata (e.g. recording time) that you can add to your image.
See also
§ Adding Annotations to Images or Movies [→ 61]
Prerequisite ✓ You have started the Live mode via the Live button and see the camera's live image in the Center Screen Area.
✓ Under the image area you see the general view options on the Dimensions tab, Graphics tab, and Display tab.
1. On the Dimensions tab, activate the Range Indicator checkbox. This marks overexposed (too bright) areas in the live image in red and underexposed (too dark) areas in blue.
2. On the Display tab, click 0.45. The display curve is adapted to a gamma value of 0.45. This sets the optimum color presentation. If you do not see this button, activate the Show All mode.
3. Move the controls under the display curve left and right to directly adjust the values for Contrast (Black) (1), Gamma (2), and Brightness (White) (3) in the live image.
[Figure: display curve with controls (1) Contrast (Black), (2) Gamma, (3) Brightness (White)]
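The Range Indicator's red/blue marking follows a simple threshold rule. As a rough illustration (not the actual ZEN implementation), a pixel at the top of the camera's value range is flagged as overexposed and one at the bottom as underexposed:

```python
def range_indicator(image, bit_depth=8):
    """Classify each pixel of a single-channel image as 'over'
    (saturated, shown red in ZEN), 'under' (zero, shown blue),
    or 'ok'. *image* is a list of rows of integer gray values."""
    top = 2 ** bit_depth - 1  # e.g. 255 for 8 bit, 65535 for 16 bit

    def classify(p):
        if p >= top:
            return "over"
        if p <= 0:
            return "under"
        return "ok"

    return [[classify(p) for p in row] for row in image]
```

For example, `range_indicator([[0, 128, 255]])` classifies the three pixels as underexposed, ok, and overexposed, respectively.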
Info
With the settings above, the display of the live image is adapted. These settings are also transferred to your acquired image. This does not change the camera settings.
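The effect of the black point, gamma, and white point controls on the display curve can be written down directly. The sketch below is an illustration, not ZEN's internal code; as noted above, only the display changes, never the raw data:

```python
def apply_display_curve(value, black=0.0, white=255.0, gamma=0.45):
    """Map a raw gray value to a display intensity in [0, 1]:
    clip to the black..white window, normalise, then apply gamma.
    A gamma below 1 (e.g. the 0.45 button) brightens the midtones."""
    v = min(max(value, black), white)   # clip into the display window
    norm = (v - black) / (white - black)
    return norm ** gamma
```

With the default 0.45 gamma, a mid-gray input of 128 is displayed at roughly 0.73 of full brightness instead of 0.50, which is why the button visibly brightens dim fluorescence signals.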
Info
Availability of Created Scalings
If you create a manual scaling, this data is user-specific by default, i.e. the scaling is not globally available for all users of the PC. The scaling is saved in C:\Users\Username\Documents\Carl Zeiss\ZEN\Documents\Scalings.
§ If you create a scaling that should be available to all users on the PC, you have to place the file in C:\Program Files\Carl Zeiss\ZEN 2\ZEN 2 (blue edition)\ZEN\en\Documents\Scalings.
Prerequisite ✓ You have an object micrometer oriented horizontally on the microscope stage.
✓ You have defined all your microscope settings correctly with the Microscope Control tool, or the Microscope Components tool (ZEN lite only). In our example we use an objective with 10x magnification.
1. Acquire an image of the scale in your object micrometer using the objective to be scaled manually, see also Acquiring Camera Image [→ 33].
→ The image is displayed in ZEN.
2. In the bottom status bar, click the small arrow in the Scaling section.
→ The Scaling dialog opens.
3. In the dialog, deactivate Select Automatically.
4. In the Create New Scaling section, click Interactive Calibration....
→ The calibration window opens in the image area.
5. Click Single Reference Line (selected by default) and activate Automatic Line Detection.
6. Draw the reference line along the scale.
7. Enter the true distance between both scale lines in the calibration window. In our example this is 500 micrometers.
8. Enter a name for the scaling (e.g. Obj 10x) and click Save Scaling.
You have performed a manual scaling for your objective. Repeat this sequence for all objectives you need a manual scaling for. Always ensure that you select the correct objective in the Microscope Control tool (or the Microscope Components tool in ZEN lite), as well as the matching manual scaling in the status bar.
Info
§ The function Automatic Line Detection fits both end points of the reference line to the closest scale lines in the image. Thus, the distance is calculated with sub-pixel accuracy.
§ If you have defined a manual scaling for an objective and activate the Select Automatically checkbox, the software uses the measured scaling instead of the theoretical one. You will recognize this via the label Measured instead of Theoretic beside the pixel size.
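The arithmetic behind interactive calibration is straightforward: the scaling is the known physical distance divided by the measured pixel length of the reference line. A small sketch (the helper names are ours, not ZEN's):

```python
def pixel_size_um(true_distance_um, line_length_px):
    """Scaling in micrometres per pixel, from a reference line of
    known physical length drawn over *line_length_px* pixels."""
    return true_distance_um / line_length_px

def to_micrometres(length_px, scaling_um_per_px):
    """Convert a measured pixel length to micrometres."""
    return length_px * scaling_um_per_px
```

For the 500 µm example above, a reference line measured at, say, 1086 px (a hypothetical pixel count) would give a scaling of about 0.46 µm/pixel; every subsequent pixel measurement is then multiplied by this factor.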
1. Click File > Exit. Alternatively, you can use the shortcut Alt+F4 or click Close in the program bar.
Info
If you have not saved your files, the Save Documents dialog opens before the software
closes. Select the files you want to save or the files you want to discard. If you are connected
to ZEN Data Storage, you can also select to save and upload the files to the server.
3 Basic Concepts
3.1 Image Acquisition
The software turns all microscopes, microscope systems such as LSM, and cameras from ZEISS into efficient and tailor-made imaging systems. With little training you can interactively control the entire workflow of image acquisition, processing, and analysis.
Depending on the system, you can capture single images, multi-channel fluorescence images, or video sequences with up to 16 bits of image information per channel. The software contains the so-called Smart Setup, which automatically delivers several proposals for the optimal dye and wavelength combinations for an experiment.
A wide range of different cameras can be used, from simple consumer cameras through to high-resolution and high-sensitivity microscope cameras. The seamless integration of cameras into the software allows you to acquire complex images and image sequences with one mouse click. For best results we recommend using ZEISS Axiocam microscope cameras.
After acquisition, the image is immediately displayed on your screen. It can then be optimized using a wide range of techniques:
Even with ZEN lite, you can perform simple interactive measurements. The measured values (e.g. lengths, areas, and perimeters) are made available in a data table and can be processed further using spreadsheet programs.
With the optional modules Image Analysis and Measurement, you can perform professional analysis tasks like generating automatic measurement procedures or measuring microscopic structures interactively.
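The lengths, areas, and perimeters mentioned above reduce to elementary geometry once the outline of a structure is available as calibrated coordinates. A sketch using the shoelace formula (our own illustration, not the module's implementation):

```python
import math

def perimeter(points):
    """Perimeter of a closed polygon given as (x, y) vertex pairs."""
    n = len(points)
    return sum(math.dist(points[i], points[(i + 1) % n]) for i in range(n))

def area(points):
    """Enclosed area of a closed polygon via the shoelace formula."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```

If the vertices are in pixel coordinates, multiply lengths by the scaling (µm/pixel) and areas by its square to obtain calibrated values like those shown in the data table.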
For the ZEN software we developed a special file format called *.czi (Carl Zeiss Image). Besides the image data itself, the format saves a lot of additional data, for example the date of acquisition, microscope settings, exposure values, size and scale details, and the contrast procedures that were used. All annotations and measured values are also saved with the file.
To learn more about the ZEISS image file format, we recommend visiting the ZEISS Microscopy Community forum on the internet (http://forums.zeiss.com/microscopy/community/forum.php). There you can join interesting discussions or download the detailed documentation of the file format.
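Because the additional data in a .czi file is stored as XML metadata, it can be inspected programmatically. The fragment below is illustrative only: the element names and structure are a simplified assumption, not the authoritative schema (see the file-format documentation for that), and for real files a dedicated reader library such as ZEISS's pylibCZIrw or the third-party czifile package is the right tool. The sketch only shows the idea of pulling the per-axis scaling out of such metadata:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified metadata fragment; real CZI metadata is far
# richer and its exact schema may differ from this sketch.
METADATA = """
<Metadata>
  <Scaling>
    <Items>
      <Distance Id="X"><Value>4.6e-07</Value></Distance>
      <Distance Id="Y"><Value>4.6e-07</Value></Distance>
    </Items>
  </Scaling>
</Metadata>
"""

def scaling_um_per_px(xml_text):
    """Extract per-axis scaling from a metadata fragment, converting
    the stored value (metres) to micrometres per pixel."""
    root = ET.fromstring(xml_text)
    return {d.get("Id"): float(d.findtext("Value")) * 1e6
            for d in root.iter("Distance")}
```

On the fragment above this yields 0.46 µm/pixel for both X and Y, matching the kind of scaling value ZEN shows in the status bar.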
3.5 Extensions
The extensions concept allows you to extend the functionality of ZEN dynamically. From a technical point of view, the concept is comparable with plug-ins or add-ons. For the extensions we reserved a special area (Extensions tab [→ 874]) within the software so that you can find all loaded extensions at a glance.
In the upper right corner of the program window under Design, you can select a Light or Dark
screen layout. Optionally, you can change the design with the shortcut Ctrl+D.
Prerequisite ✓ You are in the Tools menu > Customize Application dialog.
✓ The Toolbar tab is selected by default.
To enlarge the workspace, move the slider left or right. Click Reset to return to the default setting.
This function allows you to undock or dock a tool window. An undocked tool window can be positioned anywhere on the screen.
1. Click Undock to undock a tool window. Once undocked, the tool window can be moved around by clicking and dragging its blue bar.
2. Click Dock to dock a tool window back to its place in the left tool area.
Info
With the Dock All Tools function in the Workspace Configuration [→ 25] you can globally attach all undocked tool windows back to their initial positions.
Info
Persistence of the Grid
If you add the grid during a Live or Continuous acquisition, the grid setting is persisted for all future acquisitions. To deactivate the grid, it must be deactivated in the same mode in which it was activated, i.e. during Live or Continuous acquisition.
Prerequisite ✓ You have opened an image in which you want to display a grid, or you have started a Live or Continuous acquisition.
1. Right-click in the image and select Grid. Alternatively, click Graphics > Grid in the menu bar.
→ The grid is displayed in the image.
You can also configure a login with your Windows account. In this case an additional login button for Windows accounts (USERNAME@DOMAIN) is displayed on the login screen.
2. Activate the Enable User Management checkbox if it is not already activated and restart the software.
→ User management is active.
4. Enter a Name for the new user. Optionally, enter a Description and/or enter and confirm a Password for the new user.
5. Click OK.
You have successfully created a new user. All settings take effect with the next start of the software. Make sure that you remember the password, username, etc. Now you can add the user to a specific user group, see Adding Users to a Group [→ 45].
Prerequisite ✓ You are in the User and Group Management dialog (Tools > Users and Groups...).
✓ Enable User Management is activated.
1. Click Groups.
→ The tab displays all currently configured user groups.
2. Click .
→ The New Group dialog opens.
3. Select the Type of group.
4. Enter a Name for the new group. Note: Do not use a backslash (\) in the group name, except for an Active Directory group.
5. Enter a Description for the group. This step is optional.
6. Click OK to close the New Group dialog.
→ The respective group is added to the tab.
7. Click OK to close the User and Group Management dialog.
You have created a new group. You can now add users to this group (see Adding Users to a Group [→ 45]).
See also
§ Managing Group Privileges [→ 47]
§ Managing Access Rights for User Groups [→ 46]
Prerequisite ✓ You are in the User and Group Management dialog (Tools > Users and Groups...).
✓ Enable User Management is activated.
1. Click Groups.
→ All available groups are displayed. By default, you have an Administrators group. To manage access rights for user groups, see Managing Access Rights for User Groups [→ 46].
2. Select the group you want to add a user to, e.g. Administrators.
3. Click .
→ The group properties dialog opens. Under Data > Members, all members of the group are displayed.
4. In the Members list, click .
→ The Select User dialog opens.
5. Select the user you want to add to the group and click OK.
6. Click OK to close the properties dialog of the group.
You have successfully added a user to the group.
You can restrict the access of user groups to certain functionalities of the software. If you use ZEN Data Storage, you can also assign privileges to user groups, see Managing Group Privileges [→ 47].
Prerequisite ü You are in the User and Group Management dialog (Tools > Users and Groups...).
ü Enable User Management is activated.
1. Open the Groups tab.
à All available groups are displayed.
2. Select the group you want to manage access rights for.
3. Click .
à The Group Properties dialog opens. In the left column, under Access Rights, all areas
for which you can configure access rights (e.g. Menu, Processing) are displayed.
4. Under Access Rights, select the area where you want to restrict access rights, e.g. Left
Tool Area.
à A list of elements is displayed for which you can restrict the access.
5. Click on the Check mark button in front of the respective entry.
à The button changes to a Minus. In this example, the selected group is denied access to the Processing tab in the Left Tool Area.
If you use ZEN with ZEN Data Storage, privileges are assigned to user groups. They specify what
actions members of the group can perform in the software.
The software contains various pre-defined roles, each with different sets of privileges. Typically,
the software contains one user group for each role. However, you can create any number of user
groups with arbitrary privileges.
à The privileges for the ZEN Data Storage groups are displayed. Each privilege is displayed
with its Name, a Description, and the Application Name. Here you can see which
privilege is designated for groups in ZEN, ZEN core, or the ZEN Storage Processing
Server. If the field Application Name is empty, the respective privilege is generally
available.
5. Select the privileges for the user group.
à You can click on one of the pre-defined Privilege sets or activate individual checkboxes
to create a custom set of privileges.
6. Click on OK.
You have now set/changed the privileges for this group.
You can configure the user management to allow logging in with a Windows user name and password.
Info
Active Directory with ZEN Data Storage
If you are using Active Directory login with ZEN Data Storage, some special points need to be
observed:
4 During the installation of ZEN Data Storage, on the Settings tab of the installer, you have
set the parameter Enable Active Directory to True. For more information, also refer to
the installation guide for ZEN Data Storage.
4 The ZEN Data Storage server must be part of the same Windows domain from which the software tries to log in with its Windows credentials.
Prerequisite ü ZEN is open with active user management, and you are signed in as administrator.
1. Go to Tools > Users and Groups.
à The User and Group Management dialog opens.
2. Click Groups.
à The tab displays all currently configured user groups.
3. Click .
à The New Group dialog opens.
4. For Type, select Active Directory.
5. For Name, click .
à The Select Group dialog opens.
à The fields for object type and location are filled with a default. To change them, click Ob-
ject Types or Locations to open another dialog to select the respective Object Types
or Locations.
6. In the text field below, enter the name of the group you want to select. If you are not sure
if your name is correct, click Check Names to open a dialog and select the suitable entry.
For information on looking up the groups your own account belongs to, refer to the instal-
lation guide.
7. Click OK.
à The name is displayed in the New Group dialog.
8. Enter a Description for the group. This step is optional.
9. Click OK to close the New Group dialog.
à The respective Active Directory group is added to the list of groups.
See also
2 ZEN Data Storage Client [} 269]
2 Adding Users to a Group [} 45]
2 Managing Access Rights for User Groups [} 46]
3.7.7 Options
The options apply to all users, regardless of the user groups to which the user is assigned.
Parameter Description
Check the following rules for a password: Here you can specify certain rules or criteria for a password that is created. If the checkbox is activated, the rules must be fulfilled when a new password is created. The following rules can be adjusted:
– Min. number of lower case characters: Sets the minimum number of lower case letters a password must have. For example, if you set 2, the password must contain at least two lower case characters, like e and f.
– Min. number of upper case characters: Sets the minimum number of upper case letters a password must have. For example, if you set 2, the password must contain at least two upper case characters, like C and G.
– Min. number of digit characters: Sets the minimum number of digits a password must have. For example, if you set 3, the password must contain at least three digits from 0 - 9, like 5, 6 and 7.
– Min. number of special characters: Sets the minimum number of special characters a password must have. For example, if you set 1, the password must contain at least one special character, like &.
– Minimum length: Sets the minimum length a password must have. For example, if you set 9, the password must contain at least nine characters (any from above).
Do not allow user name as password: If activated, an existing user name cannot be used as a password for the software.
Disable the reuse of last used passwords: Activated: Disables the reuse of a specified number of last passwords.
– Number: Sets the number of passwords which cannot be reused after each other. For example, if you enter the number 3, you have to assign 3 different passwords one after another before you can use (reuse) an old password.
Disable the use of common passwords: If activated, you can create and edit a list of passwords that are locked from usage.
– Edit: Opens an editor to edit the list of common passwords. For example, if you add the entry "123456789Password", this password cannot be assigned by a user.
Force users to change password after period of time: Activated: The user must change the password after the specified period of time elapses. Deactivated: The password never expires.
– Days before expiry: Specifies the number of days after which the password expires.
Lock user after wrong password entries: Activated: Locks the user account after a number of wrong password entries.
– Maximum number of wrong entries: Sets the number of attempts the user has when entering a wrong password. For example, if you enter 3, the user can enter a wrong password three times before the user account is locked.
Lock screen after certain time span: Activated: After a period of inactivity, the screen is locked and the user must enter the password to continue working. Deactivated: The screen is never locked.
– Minutes until screen lock: Specifies the time span after which the screen is locked.
– Export...: Specify the location on the file system where the database should be exported.
If you want to set up the user management with Active Directory so that you can log in with your
Windows credentials, it is useful to know the Active Directory groups to which your account be-
longs.
Info
With a very high number of experiments saved in the Experiment Manager (more than 500), moving from the Locate to the Acquisition tab takes a considerable amount of time. Delete redundant or no longer used experiments to avoid this delay.
Prerequisite ü You have switched on and configured your microscope system and all components.
ü You have successfully started the software.
1. In the Left Tool Area, open the Acquisition tab.
2. In the Experiment Manager, click and select New from the dropdown.
à A new experiment is created.
3. Enter a name for the experiment, e.g. "3-channel_experiment".
In the following chapters you will learn how to set up and run multi-channel experiments quickly and easily.
Info
MicroToolBox (MTB)
Make sure that you work with a fully motorized microscope system. All microscope components (e.g. objectives, filters, etc.) must be configured correctly in the MicroToolBox (MTB) software beforehand.
There are two variants for setting up multi-channel experiments. The first variant uses Smart
Setup, while the second variant uses the Channels tool. Both variants have similarities and differ-
ences, which are presented in the following overview:
Commonalities
§ Fluorescent dyes and transmitted light techniques can be selected from a database.
§ Hardware settings for motorized microscopes, which take the properties of the selected dye
and the available microscope hardware into account, can be created automatically.
§ Bases for experiments can be created using both variants and experienced users can optimize
settings further.
Differences
Info
If you see the error message Smart Setup calculation failed, it was not possible for Smart Setup to calculate any proposal. This may be because the filters and light sources available on the system do not allow an image of the dye to be acquired with a good signal strength or with little crosstalk. The channel for this dye or the contrast method therefore cannot be created. In this case, try selecting another, similar dye.
Should the error message be displayed for all dyes that you select, this may be due to one of
the following causes:
- no light source has been configured or the light source is switched off.
- no camera has been configured on the system, the camera is not connected or (on some
models) has been switched off.
à Depending on which dye you have selected and the microscope hardware available, up
to three different proposals (Best Signal, Fastest, Best Compromise) are displayed. These
differ in terms of signal strength, crosstalk and speed. Select the proposal that best
meets the needs of your experiment.
7. To select a proposal (if more than one is available), activate it for all active configured channels.
8. To optimize the experiment settings further, click on a Motif button. The Automatic button is set as default.
à With the Motif buttons you can automatically optimize acquisition parameters and camera settings, either for high image quality (Quality button) or for faster acquisition at reduced image quality (Speed button). Find a more detailed description of the Motif buttons in the Smart Setup dialog.
9. To adopt the suggestion and leave Smart Setup, click on the OK button.
à The added channels are adopted automatically into the Channels tool.
10. Click on the Set Exposure button in the Action buttons bar on top of the Acquisition tab.
à The exposure time is now measured for all three channels one after the other. This is
adopted into the settings for the channels. Following the measurement of the exposure
time, the multi-channel image is acquired automatically and displayed in the Center
Screen Area.
11. To save the experiment together with all the settings, click in the Experiment Manager on
the Options button.
12. In the Experiment Manager click on the Save entry in the drop-down list.
You have set up the multichannel experiment using Smart Setup, executed it and then saved the
configuration. This means that you can repeat the experiment as often as you like using the same
settings.
Info
This workflow does not work with LSM systems.
7. To save the experiment together with all the settings, click in the Experiment Manager on
the Options button.
8. Click on the Save entry in the drop-down list.
You have set up the multichannel experiment using the Channels tool, executed it and then
saved the configuration. This means that you can repeat the experiment as often as you like using
the same settings.
The software is delivered with a large number of preset dyes. Dyes and their parameters are stored in the dye database. If you have created a custom filter cube, you may need additional dyes. You can create new dyes and a custom dye database with the Dye Editor in the Tools menu.
4 Dye spectra
Here you see the available dye spectra. Click on the relevant tabs to display the emis-
sion, excitation, or extinction spectra. On the Overview tab you can see all spectra at
a glance.
See also
2 Creating a Custom Dye [} 57]
File menu
Open folder...  Opens several ExEml files that have been saved together in the same folder.  Ctrl + S
Save As...  Saves the open ExEml file under a new name.
Dataset menu
Cut  Cuts the selected data set and copies it to the clipboard.  Ctrl + X, Shift + Del
Tools menu
If none of the preset dyes matches your requirements, you can create a new custom dye.
5. In the Names section, click in the Name input field to change the name of the custom dye.
à The name enables you to find your custom dye in the Smart Setup or Channels tool.
6. Fill in other known parameters in the following sections, e.g. Providers, Properties, Ref-
erences.
7. Change to the Emission tab.
8. To fill in Wavelength (nm) and Spectrum value in the table on the right, you have 3 options:
Prerequisite ü You have created a new dye in the Dye Editor dialog.
ü You are on the Emission tab.
1. In the Wavelength (nm) > Spectrum value table, click on the Plus icon.
Add 10 rows by pressing Shift while clicking the Plus icon.
Add 100 rows by pressing Alt + Shift while clicking the Plus icon.
If you have recorded Wavelength (nm) and Spectrum value of your custom dye in another
source, e.g. an Excel sheet, you can copy the data and paste it directly into the Dye Editor dialog.
Prerequisite ü You have created a new dye in the Dye Editor dialog.
ü You have opened your source, e.g. an Excel sheet with the emission data.
1. Arrange the data in two columns like on the Emission tab.
2. Mark the data and copy it via Ctrl + C. Alternatively, right-click on the marked area to open
the shortcut menu and select Copy.
3. Change to the Dye Editor dialog.
4. On the Emission tab, click in the first row of the Wavelength (nm) > Spectrum value ta-
ble and press Ctrl + V. Alternatively, right-click in the first row to open the shortcut menu
and select Paste.
à The emission data is inserted in the table and the emission graph is displayed next to the
table.
5. Save the dye via File > Save...
You have saved a new dye in your custom database.
You can use the new dye in the Smart Setup or in the Channels tool.
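If you want to generate such two-column emission data programmatically instead of copying it from an existing source, the following Python sketch produces tab-separated Wavelength (nm) / Spectrum value pairs for a model (Gaussian) emission curve. The peak and width values are illustrative assumptions of ours, not data from the dye database.

```python
import math

def model_spectrum(peak_nm, width_nm, start=380, stop=720, step=10):
    """Return (wavelength, normalized spectrum value) pairs for a Gaussian model curve."""
    return [
        (wl, round(math.exp(-0.5 * ((wl - peak_nm) / width_nm) ** 2), 4))
        for wl in range(start, stop + 1, step)
    ]

# Illustrative peak at 461 nm; the two tab-separated columns can then be
# pasted into the Wavelength (nm) > Spectrum value table on the Emission tab.
rows = model_spectrum(461, 40)
text = "\n".join(f"{wl}\t{value}" for wl, value in rows)
```

The spectrum values are normalized to a maximum of 1, which matches the idea of a relative spectrum; adjust the wavelength range and step to your needs.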
If the data of a preset dye, e.g. the emission data of the DAPI dye resembles your custom dye, you
can copy the data from the preset dye into a new custom dye.
NOTICE
Do not edit preset dyes
Editing preset dyes leads to irreversible data loss.
Note that you should therefore only copy the data set of a preset dye.
Prerequisite ü You have created a new dye in the Dye Editor dialog.
1. Open a second Dye Editor dialog via Tools > Dye Editor....
2. Confirm the occurring dialog with Yes.
Perform the following steps to open the ZEN Dye Database in the second Dye Editor dia-
log:
3. In the File menu, click on the Open folder... entry.
4. In the Windows dialog, open the DyeDataBase with all preset dyes by following the file
path:
C:/Program Files/Carl Zeiss/ZEN/ZEN 2 (blue edition)/ZEN/Resources/DyeDataBase
à All preset dyes are displayed in the second Dye Editor dialog.
5. Use the Search input field, to quickly find a preset dye with similar values, e.g. DAPI.
6. Select the desired dye.
7. Click on the Copy the current dataset as XML button.
Info
You can paste other properties of preset dyes in the same way as the emission data, e.g. All Spectra data or Emitter Properties.
Prerequisite ü You have successfully created a first custom dye and a custom dye database, see Creating a
Custom Dye [} 57].
ü You have opened the Dye Editor dialog.
1. In the Dataset menu select New....
à The Create new dataset dialog opens.
2. Enter an ID for your new dye, e.g. DyeC2 and confirm with OK.
à The entry Unnamed other dye appears in the Exeml Collection list.
3. Add a name for the new dye by clicking on the Plus icon in the Names section.
3. In the Experiment section, select Load Default Experiment from the drop-down list.
4. Click on OK.
à The Options dialog closes.
5. Configure your experiment for startup default or select an existing experiment.
6. Click on Options and select Set As Start up Default.
The active experiment is set as startup default now and marked with the symbol.
This experiment is loaded when starting the ZEN software.
It is possible to generate an experiment template. A template cannot be modified, but it can be used as a starting point for your acquisition.
If you want to generate an experiment template, the following steps are necessary.
1. Create an experiment.
2. In the menu bar, click on Help > About ZEN...
à The About ZEN dialog appears.
3. Click on Show ZEN Information.
à The Application Information dialog appears.
4. Open the Folders tab, scroll to the Experiment Templates entry, and double-click on it.
à The folder Experiment Setups will open.
5. Export the experiment generated before into the Carl Zeiss\ZEN\Templates\Experiment
Setups folder.
6. Create a descriptive name for the experiment template.
You have successfully created an experiment template.
It is possible to generate and save multiple templates.
To apply an experiment template, see chapter Applying an Experiment Template [} 60].
If you want to apply an experiment template as a starting point for your acquisition, the following
steps are necessary.
Prerequisite ü The experiment template to be used has been created before, see chapter Creating Default
Experiments as Templates Automatically [} 60].
1. On the Acquisition tab, click on the Options button and select New from Template.
à The available experiment templates will be shown on the right side.
2. Click on the name of the appropriate template.
The selected experiment template will be loaded as starting point for the acquisition.
Using annotations, you can highlight specific regions or add metadata or specific measurement information to images and movies. You can use the annotations within several workflows and functions:
§ For your own information, you can display metadata within images and movies.
§ When working with Mean ROI view, you use the annotation functions to draw regions into
the image.
§ When analyzing images manually, you can record measurement results in the image.
§ When exporting images or movies, you can add annotations that are visible in the export for-
mat of your images and movies.
The annotation tools are accessed via the Graphics tab or the Graphics Menu [} 822].
Remove Annotations
1. Select the annotation in the image or in the Annotations/Measurements table, and press
the Del key. Alternatively, right-click the annotation and select Delete.
Additionally, the annotations can be edited, rearranged, and formatted. For more information,
see Editing Annotations in Images or Movies [} 62].
See also
Edit Annotations
You can change the annotation type or the unit of the time or of the measurement.
1. To edit text in a textbox, double-click the textbox and change the text.
2. In the image, select the annotation and right-click to open the context menu. Select For-
mat Graphical Elements.
à The Format graphical elements dialog opens.
3. Edit the annotation and click Close to save.
Rearrange Annotations
1. Select the annotation with the left mouse button and drag and drop the annotation to an-
other position.
Format Annotations
1. In the image, select the annotation and right-click to open the context menu. Select For-
mat Graphical Elements.
à The Format graphical elements dialog opens.
2. Format the annotation and click Close to save.
See also
2 Format Graphical Elements Dialog [} 1040]
2 Adding Annotations to Images or Movies [} 61]
ZEN offers the functionality to use a machine learning model trained for denoising directly during
a continuous acquisition. Note that currently only models trained with ZEN are supported.
Prerequisite ü You have a trained denoising model available in ZEN. For more information, also refer to Intellesis Denoising [} 511]. Note that in order to be able to train a model, you need the license for the AI Toolkit.
1. On the Acquisition tab, set up your acquisition with the respective tools, e.g. the Imaging Setup, Channels and Acquisition Mode tools. Note that the Acquisition ROI must have a frame size of at least 1024x1024 pixels. Depending on the chip size of your camera, you may have to check image size and binning to ensure that the frame size meets this requirement.
à Your acquisition is set up.
2. In the Channels tool, activate Live Denoising.
à The dropdown is activated.
3. In the dropdown, select the denoising model you want to use.
à Denoising is now activated for your acquisition. Note that for widefield systems with a
dual camera, only the first model will be applied to both channels.
4. Click Continuous.
You have started a continuous acquisition with simultaneous denoising.
See also
2 Creating and Training an Intellesis Denoising Model [} 514]
3.9.1 Introduction
If you want to work with focus strategies, you have to use the Focus Strategy tool. There you
can select the suitable strategy and adjust the corresponding settings, e.g. defining Z-positions
manually or automatically, and update these during the experiment. Note that the availability of
certain focus strategies depends on your system and available components (e.g. Definite Focus.2).
General Preparations
Prerequisite ü To use focus strategies, you will need a motorized focus drive/Z-drive.
ü You are in the Left Tool Area on the Acquisition tab.
ü You have created a new experiment [} 51], defined at least one channel [} 51] and ad-
justed the focus and exposure time.
1. Activate the acquisition dimensions (e.g. Tiles, Time Series) you want to use for your experiment.
2. Open the Focus Strategy tool.
Select this focus strategy to automate the focusing of your specimen before and during acquisi-
tion with the help of the Software Autofocus. This is particularly useful for Time Series or Tiles
experiments.
Prerequisite ü To use the Software Autofocus focus strategy, you will need the module.
For LSM systems, the Autofocus module is part of the system license.
ü You have completed the general preparations [} 63] for using focus strategies (experiment
created, at least one channel defined, acquisition dimensions activated).
ü You are on the Acquisition tab in the Focus Strategy tool.
1. Select the Software Autofocus entry from the drop-down list. Note that the Z values de-
fined by Tiles Setup are ignored.
2. In the Reference Channel section, select the channel that you want to use for the focus
action from the list. Expand the section if you do not see it in full. Note that the reference
channel does not necessarily have to be an acquisition channel. Not all acquisition modes
can be used as reference. For LSM 980, LSM Lambda tracks and Online Fingerprinting cannot be selected as reference channels. Also, Airyscan SR and MPLX tracks might fail when used as reference for the reflex autofocus function.
3. Click on the Set as Reference Channel button.
4. In the Time Series Loop and/or Tiles Loop sections of the Focus Strategy tool you can
define when focus actions should be performed during the course of the experiment.
5. Open the Software Autofocus tool.
6. Adjust the autofocus settings (e.g. Quality, Sampling, etc.) to your experiment conditions
or use the default settings first.
7. Set up your tile and/or time series experiment.
8. To start the experiment, click on the Start Experiment button.
You have successfully used the Software Autofocus to bring images into focus automatically dur-
ing the experiment.
Select this focus strategy to use the definite focus device to stabilize the focus in the event of tem-
perature fluctuations during your Time Series experiments.
Prerequisite ü To use the Definite Focus focus strategy, you will need the Definite Focus hardware device.
ü You have completed the general preparations [} 63] for using focus strategies.
ü You are on the Acquisition tab in the Focus Strategy tool.
1. Select Definite Focus as focus strategy from the dropdown list. Note that Z values defined
by Tiles Setup are ignored.
2. In the Stabilization Event Repetitions and Frequency section, select Standard mode. This mode uses our recommended default settings for stabilization. When selecting the Expert mode, you can adjust all settings according to your needs.
3. Set up a Time Series experiment, see Acquiring Time Series Images [} 390].
4. Use the Live mode to set the focus position using the focus drive.
5. To start the experiment, click on Start Experiment.
à Definite Focus is initialized at the start of the experiment at the current focus position.
The focus is then stabilized in accordance with your settings during the time series exper-
iment. You will be reminded to set the focus accordingly prior to the experiment starting.
You can do this by navigating to a suitable location (position or Tile region) and starting
live or continuous. You can then continue with the experiment or cancel it.
You have successfully used the Definite Focus to stabilize the focus during a Time Series experi-
ment.
For Local or Global Focus Surfaces, you need to select the focus strategy Use Z-Values/Focus Surface defined in Tiles Setup. This strategy is selected by default if you have licensed the Tiles module. You can then acquire tile images along local or global focus surfaces (tile region specific/position specific) and use the focus strategy for optimal image results.
A local/global focus surface ensures that all tiles are in focus on tilted or irregular specimens. Local
focus surfaces for tile regions are interpolated on the basis of the focus positions of support
points. Positions automatically have a horizontal focus area with the Z-value of the position.
The following guide explains how to use the focus strategy for local focus surfaces.
Prerequisite ü To use the Use Z-Values/Focus Surface defined in Tiles Setup focus strategy, you need a
license for the Tiles module.
ü You have read the introduction for using focus strategies (experiment created, at least one
channel defined, acquisition dimensions activated), see Introduction [} 63].
ü You are on the Acquisition tab in the Focus Strategy tool.
1. Select the Use Z-Values/Focus Surface defined in Tiles Setup entry from the drop-
down list (if not selected by default).
2. If you want to set up the entire focus strategy, you can click Optimize this Focus Strat-
egy to use the Focus Strategy wizard, see Focus Strategy Wizard [} 933].
3. In the Z-Values/Focus Surface section, select Local (per Region/Position).
4. Under Initial Definition for Z-Values/Focus Surface, select the By Tiles Setup entry
from the drop-down list.
See also
2 Tiles & Positions with Advanced Setup [} 344]
2 Creating a Local Focus Surface [} 351]
2 Focus Strategy Tool [} 928]
3.10 Import/Export
This example describes the workflow for the Image Export. The typical workflow is the same for
both export and import of images.
Using the Quick Export function, you can export images automatically with a single click of the
mouse, without setting the processing function parameters.
Info
If you export a time series or z-stack image using the Quick Export function, a movie is auto-
matically generated using the default values of the Movie Export method.
See also
2 Image Export [} 120]
Using the Quick Export function, you can export movies automatically with a single click of the
mouse, without setting the processing function parameters.
Prerequisite ü You have acquired or opened an image from a time series or a z-stack image.
1. In the Right Tool Area, in the Images and Documents tool, click . Alternatively, click
File > Export/Import > Quick Export.
The selected image is automatically exported with the default settings of the Movie Export pro-
cessing function (AVI, original size). The movie is placed in the folder that is currently set in the
Movie Export processing function.
See also
2 Movie Export [} 127]
To export multichannel images, see the following description. Not all the parameters are used in
this example. For a description of all available export parameters, see Image Export [} 120].
You have exported images of the individual channels and a pseudo color image of your multi-
channel image and automatically saved them (as a ZIP archive) in the defined folder.
Info
File Size Limitation
The export of images to most file formats is subject to limits on the file size that can be created (typically ca. 2 GB). For this reason, we recommend using the Big TIFF format. If this is not possible because the Big TIFF format does not fit your needs, check that the image size does not exceed the resources of your computer when exporting to your preferred format. A message will inform you when the size exceeds your resources. You have the following options to work around this:
4 Resize the resolution to scale the image down with the Resize slider.
4 Activate Original Data and deactivate Apply Display Curve and Channel Color.
4 With a rectangle, select a region of interest to reduce the image itself.
4 Activate Crop to Selection and Generate New Tiles to cut the image into tiles using the row and column options.
To export multiscene images, see the following description. Not all the parameters are used in this
example. For a description of all available export parameters, see Image Export [} 120].
For the image export, you can define how the exported files should be named. The naming con-
vention for exported images used by ZEN is described here. For the naming convention of the xml
files created by the image export, see Naming of xml Files for Exported Images [} 70].
<Prefix>_c<no.>  Contains channels. Note that if you have activated Use Channel Names, the actual names of the channels are written into the file name of the exported image.
The prefix is the image name you entered in the Prefix field. By default, the prefix field contains the image name. The suffix contains all image information. Example: If you export a z-stack with channel 1 and channel 2, and activate Crop to Selection and Generate New Tiles, the name of one exported image could be: TMA1_z02c1+2m1.tif
Parameter Description
<Prefix>_x<no1-no2>  Displays the x position and size. The first number (no1) indicates the start of the (subset) image in relation to the coordinate system of the original image, the second number (no2) indicates the total width of the exported (subset) image in pixels.
<Prefix>_y<no1-no2>  Displays the y position and size. The first number (no1) indicates the start of the (subset) image in relation to the coordinate system of the original image, the second number (no2) indicates the total height of the exported (subset) image in pixels.
Example 1: If you export your entire image, which has a width and height of 1024 pixels, the name of the exported image could be: TMA1_z02c1x0-1024y0-1024.tif.
Example 2: If you only export a subset with a rectangle drawn into the image, the name of the exported image could be: TMA1_z02c1x241-692y49-448.tif.
This means the exported subset image starts at the x value 241 of the original image and has a width of 692 pixels.
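The naming convention can be checked with a small parser. The following is a hypothetical Python sketch based only on the example names in this section; the regular expression and function are our own illustration, not part of ZEN:

```python
import re

# Token order assumed from the examples: z, c, m, then x/y position-size pairs.
NAME = re.compile(
    r"^(?P<prefix>.+?)_"
    r"(?:z(?P<z>\d+))?"
    r"(?:c(?P<c>[\d+]+))?"      # channel part may combine channels, e.g. "1+2"
    r"(?:m(?P<m>\d+))?"
    r"(?:x(?P<x0>\d+)-(?P<xw>\d+))?"
    r"(?:y(?P<y0>\d+)-(?P<yh>\d+))?"
    r"\.(?P<ext>\w+)$"
)

def parse_export_name(filename):
    """Split an exported file name into its naming-convention parts."""
    m = NAME.match(filename)
    if m is None:
        raise ValueError(f"not an export file name: {filename}")
    # Drop tokens that are absent from this particular name.
    return {k: v for k, v in m.groupdict().items() if v is not None}
```

For instance, parsing "TMA1_z02c1x241-692y49-448.tif" yields the prefix TMA1, z plane 02, channel 1, and the x/y position-size pairs from Example 2 above.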
If you select to also create an xml file, the image export generates two files: one contains general information about the image, the second contains the image metadata. Both xml files have a distinct naming, depending on which format you have selected in the export.
The long format includes the file type suffix in the xml titles and uses metadata for the metadata xml. The short format excludes the file type suffix from the xml titles and shortens metadata to meta for the metadata xml. See the following example:
Prerequisite ü You have saved the individual images of a Z-stack in a folder on your computer. The images
have been named systematically, e.g. Image_Z0, Image_Z01, etc.
1. On the Processing tab, open the parameters for Image Import (or via the File > Export/
Import > Import).
à You will see the default settings of the parameters for Image Import.
2. Activate the Z-stack checkbox. Deactivate all the other dimensions.
3. Enter the interval for the Z-stack. The number of planes is set automatically if the images
have been named systematically.
4. In the Import from section, select the folder that contains the individual images of your Z-
stack image.
à The individual images are displayed automatically in the list under the import directory.
5. Click on the Check Consistency button. This allows you to check whether the images can
be imported correctly.
à A check mark appears after each file name in the list. You can import the individual im-
ages.
6. Click on the Apply button at the top of the Processing tab.
The individual images are imported and combined to form a Z-stack image. You have successfully
imported a Z-stack image from individual images.
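Conceptually, the import collects the systematically named planes, orders them by their Z index, and stacks them into one image. A minimal sketch of that idea, with in-memory arrays standing in for the image files on disk:

```python
import re
import numpy as np

# Hedged sketch of what the import does conceptually: order the
# systematically named planes by their Z index and stack them into one
# Z-stack. In-memory arrays stand in for the image files on disk.
planes = {
    "Image_Z02": np.full((4, 4), 2, dtype=np.uint16),
    "Image_Z00": np.full((4, 4), 0, dtype=np.uint16),
    "Image_Z01": np.full((4, 4), 1, dtype=np.uint16),
}

def z_index(name):
    """Read the numeric Z index from a systematic name like 'Image_Z01'."""
    return int(re.search(r"Z(\d+)$", name).group(1))

# Sort the planes by their Z index and stack along a new first axis.
ordered = [planes[name] for name in sorted(planes, key=z_index)]
z_stack = np.stack(ordered, axis=0)  # shape (Z, Y, X)
```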
Info
Powerful GPU
For AI models requiring a GPU for execution, the GPU needs to fulfil certain criteria, i.e. it must be powerful enough. For the use of AI models, a GPU with 8 GB of memory is the recommended hardware configuration. Only NVIDIA GPUs and the CPU are supported. For further reference, also refer to the requirements of the Intellesis functionality, see Remarks and Additional Information [} 505].
In ZEN, you can download AI models from arivis Cloud, e.g. for instance segmentation. These
models are packaged as Docker containers and therefore need the Docker Desktop software to
work.
Prerequisite ü Docker Desktop is installed and running on your computer. For more information about
Docker Desktop, see Requirements for Docker Desktop [} 1313].
ü You have the model you want to download available on arivis Cloud and your PC is con-
nected to the internet.
ü You have created an access token and entered it in ZEN, see Creating and Entering an Ac-
cess Token [} 206].
1. Go to the Applications tab, open the AI Model Store tool and click Open AI Model Store.
à The AI Model Store dialog opens to display all available AI models.
2. On the left side, click Download from arivis Cloud.
à The model list is updated and displays all models on arivis Cloud that you have available.
3. Select the model you want to download in the list.
à The Properties section on the right displays more detailed information about your se-
lected model.
4. In the Properties section, select the model version you want to download and click Down-
load Model.
à The download of the model starts and the progress is displayed in the status bar. You can cancel the download with the button next to the progress bar.
After a successful download, the model is displayed in the AI Model Store tool. It is now avail-
able in ZEN, even without an active internet connection.
You can now use the model, e.g. in the Automatic Segmentation step of the image analysis.
See also
2 AI Model Store Tool [} 888]
2 Creating an Image Analysis Setting From an AI Model [} 429]
2 Importing AI Models [} 72]
2 Creating a New Image Analysis Setting [} 403]
2 Automatic Segmentation [} 944]
Info
Powerful GPU
For AI models requiring a GPU for execution, the GPU needs to fulfil certain criteria, i.e. it must be powerful enough. For the use of AI models, a GPU with 8 GB of memory is the recommended hardware configuration. Only NVIDIA GPUs and the CPU are supported. For further reference, also refer to the requirements of the Intellesis functionality, see Remarks and Additional Information [} 505].
In ZEN, you can import existing AI models, e.g. for instance segmentation, that you have exported
from another application. These models are packaged as Docker containers and therefore need
the Docker Desktop software to work.
Prerequisite ü Docker Desktop is installed and running on your computer. For more information about
Docker Desktop, see Requirements for Docker Desktop [} 1313].
ü You have the model you want to import available on your PC.
1. On the Applications tab, open the AI Model Store tool and click Open AI Model Store.
à The AI Model Store dialog opens to display all available AI models.
2. Click .
à A file browser opens.
3. Select the model you want to import, and click Open.
After a successful import, the model is displayed in the AI Model Store tool. You can now use
the model, e.g. in the Automatic Segmentation step of the image analysis.
See also
2 AI Model Store Tool [} 888]
2 Creating an Image Analysis Setting From an AI Model [} 429]
2 Downloading AI Models [} 71]
2 Creating a New Image Analysis Setting [} 403]
2 Automatic Segmentation [} 944]
On the Processing tab you can apply image processing functions (IP functions) to acquired or loaded images. The basic workflow is quite simple:
§ Select the desired processing function under Method, e.g. Color Balance.
You can search for processing functions in the Search field. To do so, just enter the first letters of the function you want to search for.
§ Set the parameters of the function under Parameters.
If you need help with a specific function and its parameters, press F1 on your keyboard. You will find detailed descriptions of each function in the online help.
§ To see how the function works, you can click on the Preview button under Image Parameters > Output.
§ Click on the Apply button to apply the processing function to the image.
This creates a new image in a new image container. The original image is not changed.
See also
2 Processing Tab [} 873]
In this topic we will show you how to extract individual fluorescence channel images of a multichannel image.
1. Select the Processing tab. Open the Method tool and select the entry Create Image Subset under Utilities.
2. In the Image Parameters section, open the Input tool and select the multichannel image as Input image.
3. In the Method Parameters section, open the Parameter tool and select the entry Channels. Deactivate the channels you do not want in the extracted image (e.g. the red and the blue fluorescence channel).
Info
Activating Images for Processing
When you activate multiple images for processing in ZEN, make sure to click exactly on the respective checkbox for the entry. If you click outside the checkbox, all activated images are deselected and only the last clicked entry is activated.
Batch processing in ZEN allows you to apply a specific processing function automatically to a batch of images. For each image you can use identical or different export settings.
10. Click Check All. All images in the list are tagged with a green marker if they have a suitable setting.
11. Activate the checkbox for each image you want to have processed and click Apply.
All selected images in the list are processed and put into the defined output folder.
Info
Activating Images for Processing
When you activate multiple images for processing in ZEN, make sure to click exactly on the respective checkbox for the entry. If you click outside the checkbox, all activated images are deselected and only the last clicked entry is activated.
This topic shows you how to export all images of a folder as a batch. For each image you can use identical or different export settings.
Prerequisite ü You have a folder with several CZI images to be exported to a new image type (e.g. TIFF, BMP, JPEG), for example two 2-channel time series images, one 2-channel Z-stack image, and one 3-channel image.
1. Go to the Processing tab and click Batch.
à The Batch Processing view opens.
2. Open the Batch Method tool and select Image Export.
3. In the Method Parameters section, open the Parameters tool.
4. In the Batch Processing view in the Center Screen Area, click + Add.
à A file explorer opens.
5. Select the folder, mark all images to be exported, and click Open.
6. Activate Use Input Folder as Output Folder to save the exported images in the folder of
the original images.
7. Click on one image in the list and set the export settings in the Parameters tool. In this example, the TIFF format is used for all time points and channels. This setting is only valid for the selected image, but can be copied to further images in the list with identical dimensions.
8. Click Copy Parameters. Select the image with identical dimensions in the list and click
Paste Parameters.
9. Continue with the other images of the list. Use Copy Parameters and Paste Parameters,
or define the export settings for each image individually in the Parameters tool.
10. Click Check All. All images in the list are tagged with a green marker if they have the correct setting.
11. Activate the checkbox for each image you want to have exported and click Apply.
All images in the list are checked and exported to separate folders in the input folder.
In this topic we will show you how to crop a region of interest (ROI) of an image.
6. Click in the image on the start position of the ROI and draw a rectangular region.
Fig. 10: Cropped ROI with the dimensions All channels and the Z-Position 11-20.
In this topic we will show you how to create an extended depth of focus (EDF) image of a Z-stack image. The in-focus planes of all z-positions are combined into one EDF image.
2. In the Image Parameters section, open the Input tool and select the Z-Stack image as in-
put image.
Depending on your loaded image, you can split certain dimensions, e.g. channels or timepoints, and save them to separate images. First, you select the method, and in the Parameters tool, you specify the dimension. Each dimension is only visible if it is present in the input image.
Create Image Subset and Split opens the resulting images in the ZEN document area. There-
fore, for this image processing function, Split Dimension is limited to a maximum of 20, e.g. time-
points, tiles, or slices. To split images along a dimension with more than 20 elements, use Create
Image Subset and Split (Write files), which creates image files in the specified folder.
Prerequisite ü You have loaded an image with more than one channel and not more than 20 timepoints.
1. Select Processing tab > Method tool, and select under Utilities the entry Create Image
Subset and Split or Create Image Subset and Split (Write files). When using the first
option, you can specify the output in the Output tool.
2. To create a subset from the available dimensions, make your settings in the sections Chan-
nels and Time.
3. In the Parameters tool, in the Split Dimension section, select the dimension along which
you want to split the image, e.g. Channel or Time.
In the Channels list, all channels of the loaded image are listed. Unselect channels you
want to exclude.
In the Time section, select the subset of timepoints which you want to include. If you have
an image with more than 20 timepoints, and you limit the range to below 20 elements,
Time will appear in Split Dimension allowing you to split the data set along this dimen-
sion (for Create Image Subset and Split).
In the Region section, select the region which you want to extract from the image, e.g.
Full or Rectangle Region.
Select Rectangle Region and draw a rectangle in the 2D view to specify the region you
want to extract.
When you use Create Image Subset and Split (Write files), you also need to specify the output folder for the images.
4. In the Input tool, specify whether you want to set the input automatically and which image you want to display after processing. Select the corresponding options.
5. In the Output tool, specify how to proceed with the new image. Select whether you want to overwrite an existing image with the same name or create a new output. The Output tool is only available if you selected the Create Image Subset and Split method.
6. Click the Apply button.
The new images are created according to your specifications, see Create Image Subset and Split
[} 194] or Create Image Subset and Split (Write files) [} 196].
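The splitting itself can be pictured as cutting a multi-dimensional array along one axis. A minimal sketch of that idea with a small synthetic array; the helper name is illustrative and not part of ZEN:

```python
import numpy as np

# Hedged sketch: splitting an image along one dimension is just cutting a
# multi-dimensional array along the chosen axis. The small synthetic array
# and helper below are illustrative, not part of ZEN.
image = np.arange(2 * 3 * 4 * 4, dtype=np.uint16).reshape(2, 3, 4, 4)  # (C, T, Y, X)

def split_dimension(img, axis):
    """Return one sub-image per element of the chosen dimension."""
    return [np.take(img, i, axis=axis) for i in range(img.shape[axis])]

per_channel = split_dimension(image, axis=0)  # 2 images, each (T, Y, X)
per_time = split_dimension(image, axis=1)     # 3 images, each (C, Y, X)
```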
The image processing functions Analyze to Label Image and Analyze to Label Image Batch
label images based on an existing image analysis setting and the parameters which you select
from the function. For general information about the image processing, see Image Processing
Workflow [} 72]. For a description of all the parameters of Analyze to Label Image, see Ana-
lyze to Label Image [} 155] and for Analyze to Label Image Batch, see Analyze to Label Im-
age Batch [} 157].
The image analysis with these functions offers a wide variety of parameter options, and the outcome depends entirely on how they are combined. Therefore, this chapter can only give an example to illustrate the use and outcome of the function. In general, labeling is possible as regions, contours, and double contours. For all three, the same options are available in Label Mode.
As an example, the following two channel image and image analysis setting with three classes are
used to illustrate the functions:
Labeling the other two classes with the option One channel per class additionally creates the output for the other two classes:
If you want the regions to have the same color as defined in the image analysis setting, you can,
for example, choose the option Region - Region Class Color and 24 Bit RGB as Pixel Type:
As stated before, you can choose the same options (labeling with pixel type maximum and region
class color) for contours or double contours as well.
Parameter Description
Settings In the Settings section at the top of the Parameter tool you are able
to save and reload the adjusted settings.
If you have adjusted the parameters for a function, simply click on
Options > New to save your setting under a new name.
– Save Saves a modified setting under the current name. An asterisk indicates
the modified state.
Third Dimension Only visible, if there is a third dimension in the input image and/or
Show all mode is activated.
Parameter Description
Here you can select how you want the function to work in the case of
multidimensional images.
Adjust per Channel Only available for images with multiple channels and if C is not se-
lected as Third Dimension.
Activated: Opens a list with the channels to allow an individual ad-
justment of each channel. For every channel you have the following
options:
– Skip Channel Skips this channel when processing. This channel will not be in the
output image.
– Copy Channel This channel is copied into the output image without a reduction of
noise.
Reset Settings to Default  You can reset the settings for a function to their defaults by clicking on the Defaults button.
3.11.8.2 Adjust
With this function you can remove a smooth background or correct uneven illumination. The implementation is adapted from the corresponding function in ImageJ and is based on the "rolling ball" algorithm (S. Sternberg, "Biomedical Image Processing", IEEE Computer, January 1983).
Parameters
Parameter Description
Radius Here you adjust the radius of the rolling ball in pixels. This value
should be at least as large as the radius of the largest object in the im-
age that is not part of the background. Larger values will also work
unless the background of the image is too uneven.
Create Background only  If activated, an image is created which contains only the detected background. Use this image to subsequently perform manual corrections of the image background, e.g. using the image calculator function.
Light Background  Use this option if your image contains a bright background and dark objects.
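The rolling-ball background estimate can be approximated with a grayscale morphological opening using a disk-shaped footprint of the chosen radius. The following sketch illustrates the principle only; it is not ZEN's or ImageJ's exact implementation:

```python
import numpy as np
from scipy.ndimage import grey_opening

# Hedged sketch: the rolling-ball background (Sternberg, 1983) is
# approximated here by a grayscale morphological opening with a
# disk-shaped footprint of the chosen radius. Structures smaller than the
# ball are removed from the background estimate; the estimate is then
# subtracted. This is an illustration, not ZEN's exact implementation.
def disk(radius):
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x * x + y * y <= radius * radius

def subtract_background(image, radius):
    """Estimate a smooth background and subtract it from the image."""
    background = grey_opening(image, footprint=disk(radius))
    return image - background

# Synthetic example: a tilted background plus one small bright object.
yy, xx = np.mgrid[0:64, 0:64]
img = 0.5 * xx                       # uneven illumination (ramp)
img[30:34, 30:34] += 100.0           # object smaller than the ball radius
corrected = subtract_background(img, radius=8)
```

The object survives the correction while the illumination ramp is removed, which is why the radius must exceed the largest foreground object.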
3.11.8.2.2 Brightness/Contrast/Gamma
This method allows you to adjust the brightness, contrast, and gamma value of an image.
Info
Unlike the adjustments that can be made on the Display tab, here the pixel values of the im-
age are changed.
Parameters
Parameter Description
Brightness Adjust the desired brightness using the slider or input field.
Changing the brightness means that each gray or color value is in-
creased or decreased by the same value. The difference between the
biggest and smallest gray or color value in the image remains the
same, however.
Contrast Adjust the desired contrast using the slider or input field.
Changing the contrast means that the gray or color values are multi-
plied by a factor. The difference between the biggest and smallest
gray or color value changes.
Gamma Adjust the desired gamma value using the slider or input field.
Changing the gamma value means that the gray or color values are
multiplied by individual factors.
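The three adjustments can be sketched as simple pixel-wise operations on an 8-bit image. The exact transfer functions used by ZEN may differ; this is an illustrative approximation:

```python
import numpy as np

# Hedged sketch of the three adjustments for an 8-bit image: brightness
# adds a constant, contrast multiplies by a factor, and gamma applies a
# power curve. The exact transfer functions in ZEN may differ.
def adjust(image, brightness=0.0, contrast=1.0, gamma=1.0):
    out = image.astype(float) * contrast + brightness    # linear part
    out = np.clip(out, 0, 255)
    out = 255.0 * (out / 255.0) ** (1.0 / gamma)         # gamma curve
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)

img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
brighter = adjust(img, brightness=20)  # every value raised by 20, then clipped
```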
This method allows you to adjust the weighting of the individual color channels of a true color im-
age.
Parameters
Parameter Description
Range to Adjust Here you can select the adjustment range for the color balance. There
are 3 ranges available:
Cyan - Red Adjust the desired color balance using the slider or input field.
Yellow - Blue Adjust the desired color balance using the slider or input field.
Magenta - Green Adjust the desired color balance using the slider or input field.
This method allows you to adjust the color temperature of a true color image. To do this, use the Temperature Delta slider. A description of the slider can be found under White Balance [} 90].
This function enhances the contrast by linearizing the histogram of the image to equal area frac-
tions in the histogram. The areas (pixel count multiplied by gray value range) of all gray values in
the histogram of the result image are the same.
Parameter Description
All Z If activated, the function is applied to all Z planes.
High Threshold The fraction of pixels that will be mapped to the highest gray value of
the output image.
Lower Threshold The fraction of pixels that will be mapped to gray value 0.
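The thresholds can be pictured as clipping a fraction of the darkest and brightest pixels before spreading the remaining gray values via the cumulative histogram. A minimal sketch of that idea (an illustrative approximation, not ZEN's implementation):

```python
import numpy as np

# Hedged sketch: a given fraction of the darkest pixels is mapped to 0, a
# fraction of the brightest pixels to the maximum value, and the remaining
# gray values are spread via the cumulative histogram. This mirrors the
# description above, not ZEN's actual implementation.
def equalize(image, low=0.0, high=0.0, levels=256):
    flat = np.sort(image.ravel())
    n = flat.size
    lo = flat[int(low * (n - 1))]           # gray value at the low fraction
    hi = flat[int((1.0 - high) * (n - 1))]  # gray value at the high fraction
    clipped = np.clip(image, lo, hi)
    hist, bins = np.histogram(clipped, bins=levels)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    idx = np.digitize(clipped.ravel(), bins[:-1]) - 1
    return ((levels - 1) * cdf[idx]).reshape(image.shape).astype(np.uint8)

ramp = np.arange(256, dtype=np.uint8).reshape(16, 16)
stretched = equalize(ramp)  # uses the full output range 0..255
```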
3.11.8.2.6 Hue/Saturation/Lightness
This method allows you to adjust the hue, saturation, and brightness of a true color image.
Parameters
Parameter Description
Hue The value of the shift represents an angle on the color wheel. The val-
ues -180 and +180 therefore have an identical effect. Negative angles
shift the color tone towards blue and positive ones shift it towards
red.
Adjust the desired shift in the color tone using the slider or input field.
Saturation Saturation describes how intense the color of a pixel is. "Chromatic" is
the maximum saturation, while "achromatic" describes colors that do
not leave a color impression.
Adjust the desired saturation using the slider or input field.
Lightness Lightness describes how light or dark a pixel appears. The greatest
difference is between black and white or between violet and yellow.
Adjust the desired brightness using the slider or input field.
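The hue shift as an angle on the color wheel can be illustrated on a single pixel using Python's standard colorsys module; ZEN of course applies this to whole images:

```python
import colorsys

# Hedged sketch of a hue shift on a single RGB pixel: the shift is an
# angle on the color wheel, so -180 and +180 land on the same hue. The
# per-pixel HSV round-trip below only illustrates the idea.
def shift_hue(rgb, degrees):
    """Rotate the hue of an (r, g, b) triple with components in [0, 1]."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    h = (h + degrees / 360.0) % 1.0
    return colorsys.hsv_to_rgb(h, s, v)

cyan = shift_hue((1.0, 0.0, 0.0), 180)  # pure red rotated by 180 degrees
```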
Rolling ball performs a background correction based on the work of Sternberg, Stanley R. "Biomedical image processing." Computer 1 (1983): 22-34. It determines the background in an image locally, which allows it to be used on unevenly illuminated images, as opposed to using a global background value. The size of the ball should be at least as large as the largest object that is not part of the background.
Parameter Description
Radius Sets the radius of the rolling ball creating the background in pixels.
Background Only Activated: Only creates a background, but does not subtract it.
Parameter Description
The following parameters are only visible if you have a CUDA-capable GPU.
GPU Activated: Uses the GPU to perform the rolling ball background cor-
rection.
This method allows you to improve images in which the quality has been impaired by uneven illu-
mination or vignetting.
If you want to perform shading correction before an experiment (recommended), you have to use
the shading correction function in the Camera tool, in the Post-Processing section [} 882].
Parameter Description
Shading Mode Selects which shading correction should be applied.
- Channel Specific  For multichannel references, the information from each channel is applied individually to the input image. The number and type of channels must be the same between the input image and the shading reference image. This is the correct setting when using a reference image which was created by the Shading Reference for Processing function (e.g. for Axioscan images).
- Tile Specific In case a shading reference has more than one tile, the tiles of the ref-
erence and the tiles of the image that should be corrected are
mapped to each other. The tiles must be identical in number and XY
position. This can be the case, for example, when using an Axiozoom
where shading is primarily determined by the sample mounting situa-
tion rather than the optical system.
Shading Reference Only available in Batch mode and if Global Shading is selected, or
Camera Shading is selected and Automatic is deactivated.
Enables you to select a reference image for the shading correction.
Display Mode
- Additive In this mode the normalized reference image is subtracted from each
camera frame. This influences the brightness of the image.
- Multiplicative In this mode each camera frame is divided by the normalized refer-
ence image. This influences the contrast of the image. This is the de-
fault setting.
The simulated/auto reference image is created by averaging up to 20
camera frames in the input image and running a lowpass filter on
them.
Parameter Description
Offset Adjust the gray value that will be added on to the newly calculated
gray values using the slider or input field. If this results in negative val-
ues, these are set to 0. Values that exceed the maximum gray value
are set to the maximum gray value.
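The difference between the two display modes can be sketched as follows. The normalization convention (zero-mean reference for additive, unit-mean reference for multiplicative) is an assumption for illustration; ZEN's internal normalization may differ:

```python
import numpy as np

# Hedged sketch of the two modes described above. The normalization
# convention (zero-mean reference for additive, unit-mean reference for
# multiplicative) is an assumption for illustration; ZEN's internal
# normalization may differ.
def correct_additive(frame, reference):
    """Subtract the zero-mean reference: shifts brightness."""
    return frame - (reference - reference.mean())

def correct_multiplicative(frame, reference):
    """Divide by the unit-mean reference: rescales contrast (default)."""
    return frame / (reference / reference.mean())

# A flat scene seen through multiplicative vignetting is flattened back
# to a uniform image by the multiplicative mode.
yy, xx = np.mgrid[0:32, 0:32]
shading = 1.0 - 0.3 * ((xx - 15.5) ** 2 + (yy - 15.5) ** 2) / (2 * 15.5 ** 2)
frame = 100.0 * shading
flat = correct_multiplicative(frame, reference=shading)
```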
See also
2 General Settings [} 83]
With this method you can create shading reference images for multi-channel tile images. It col-
lects the intensity information from many tiles and creates an averaged image representing the
shading influences of the system on the image. Note: Z-stack and time series images cannot be
processed with this function! Best results are achieved for images containing more than 200 tiles
per channel. The software creates the shading reference image and stores it directly to the Cali-
bration Manager (Shading Reference).
Parameters Description
All Scenes Only available for multi-scene images.
Selects all scenes for processing.
Selected Scene Only available for multi-scene images and if All Scenes is deactivated.
Selects the scene used for processing.
Adjust per Channel Only available for images with multiple channels.
Activated: You can adjust the settings for every channel separately.
To use a channel, select Process Channel or select Skip Channel if
you do not want a channel to be processed.
The settings Save directly as Shading Reference and Channel-spe-
cific are applied separately for each channel. If you want to use the
same settings for all channels, deactivate the checkbox.
Parameters Description
Auto Adjust Intensity  Activated: Automatically calculates the multiplication factor based on the gray values of the image and the gray values needed for use in the shading reference calibration manager.
If activated, the setting for Multiply Factor no longer has any influence on the image generation.
Multiply Factor  Only available if Auto Adjust Intensity is deactivated. Sets a multiplication factor, i.e. the software will multiply the pixel intensity of each pixel of the shading reference image by this value.
If you use your own sample, the images are often very dim and the intensity does not reach the value needed for use within the shading reference calibration manager, so the reference would be rejected.
Apply Gaussian Filter  Activated: A Gaussian filter is applied after the averaging of the fields of view from a tiled image is done. This smooths the shading reference image.
Use this filter and the Sigma factor very carefully, as it could also remove features which are real shading structures. This feature can be recommended if the number of tiles in the scanned image is low and cannot be increased for certain reasons.
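The averaging-plus-smoothing idea can be sketched with synthetic tiles; the shading profile, the numbers, and the Gaussian sigma are illustrative only:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hedged sketch: average many tiles so that sample content cancels out,
# optionally smooth the result with a Gaussian filter, and apply a
# multiplication factor. The synthetic shading (a horizontal ramp) and
# all numbers are illustrative; this is not ZEN's actual pipeline.
rng = np.random.default_rng(0)
shading = np.linspace(0.8, 1.2, 64)[None, :] * np.ones((64, 64))
# Each "tile" is the shading profile times a random overall brightness,
# standing in for varying sample content across the tile region.
tiles = [shading * rng.uniform(50, 150) for _ in range(200)]

reference = np.mean(tiles, axis=0)               # average over all tiles
reference = gaussian_filter(reference, sigma=2)  # optional smoothing
reference = reference * 2.0                      # multiplication factor

# After normalization, the reference reproduces the true shading profile.
normalized = reference / reference.mean()
flat_profile = shading / shading.mean()
```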
With this method you can create shading reference images for multi-channel tile images. It col-
lects the intensity information from many tiles and creates an averaged image representing the
shading influences of the system on the image. Note: Z-stack and time series images cannot be
processed with this function! Best results are achieved for images containing more than 200 tiles
per channel.
Parameters Description
All Scenes Only available for multi-scene images.
Selects all scenes for processing.
Selected Scene Only available for multi-scene images and if All Scenes is deactivated.
Selects the scene used for processing.
Adjust per Channel Only available for images with multiple channels.
Activated: You can adjust the settings for every channel separately.
To use a channel, select Process Channel or select Skip Channel if
you do not want a channel to be processed.
The settings Save directly as Shading Reference and Channel-spe-
cific are applied separately for each channel. If you want to use the
same settings for all channels, deactivate the checkbox.
Parameters Description
correction:
Contrasting method and condenser, fluorescence filter, magnification: Objective and Optovar; camera bit depth and RGB/BW mode, camera type and port position.
Deactivated: The system creates an All Channel calibration and performs an objective-specific shading correction. The following components will be included in the shading reference and checked before applying shading correction:
Magnification (Objective and Optovar); camera bit depth and RGB/BW mode, and camera type and port position.
Multiply Factor  Only available if Auto Adjust Intensity is deactivated. Sets a multiplication factor, i.e. the software will multiply the pixel intensity of each pixel of the shading reference image by this value.
Apply Gaussian Filter  Activated: A Gaussian filter is applied after the averaging of the fields of view from a tiled image is done. This smooths the shading reference image.
Use this filter and the Sigma factor very carefully, as it could also remove features which are real shading structures. This feature can be recommended if the number of tiles in the scanned image is low and cannot be increased for certain reasons.
This method allows you to improve the quality of Z-stack images that have been affected by bleaching during acquisition. Under Correction, you can select the desired correction mode or a combination of the modes.
Parameter Description
Decay This mode compensates the bleaching effect.
See also
2 General Settings [} 83]
Parameters
Parameter Description
Automatic Activated: The white spot is calculated automatically from the image
data.
Temperature Delta Adjust the delta that will be added on to the newly calculated color
values. Negative values reduce the color temperature, while positive
values increase it. A value of 1 corresponds to 10 Kelvin.
Parameter Description
Channel Selects the channel where the PSF should be attached.
PSF File Selects the PSF-file created by the image processing function Cre-
ate PSF.
Parameter Description
X Scaling Changes the scaling of the image in X-direction.
The following units are
possible:
- cm/px
- mm/px
- µm/px
- nm/px
- pm/px
- i/px
- mil/px
Parameter Description
Y Scaling Changes the scaling of the image in Y-direction.
The following units are
possible:
- cm/px
- mm/px
- µm/px
- nm/px
- pm/px
- i/px
- mil/px
Pick scaling from file Applies the scaling from another czi-file.
Parameter Description
Distance from document bounds  Sets the distance of the Scale Bar from the bounds of the image (in pixels).
- Top Left
- Top Center
- Top Right
Parameter Description
- Middle Left
- Middle Center
- Middle Right
- Bottom Left
- Bottom Center
- Bottom Right
- Horizontal
- Vertical
Parameter Description
Split Mode  Selects the mode for splitting the multiblock image.
- Homogeneous groups  Splits the multiblock image into the single dimensions. The blocks will remain.
Target Folder Specifies the folder where the split images are stored. The path
of the destination folder is displayed automatically in the display
field. To change the folder, click on the button to the right of the
display field.
3.11.8.4 Deconvolution
The Airyscan Joint Deconvolution method provides an alternative processing of Airyscan SR data
for even higher resolution. An unprocessed Airyscan SR file needs to be selected as an input file.
Airyscan Joint Deconvolution enables you to adjust the strength of the processing by defining the
number of iterations. Note that overdoing the processing strength can result in artificially small
structures, or splitting of solid structures.
Parameter Description
Start with Last Re- Activated: Keeps the iteration result and adds additional iterations in-
sult stead of starting the processing from the beginning again.
Sample Structure Sets the default value for typical sample types.
– Sparse Sets the default value for a sample with small structures and lots of
black background (e.g. beads).
– Dense Sets the default value for a sample with lots of structures and little
background.
Maximum Iterations  Sets the maximum number of iterations. This input is connected to the Quality Threshold parameter. If the Quality Threshold is set to 0, the number of iterations is processed as set here. If a threshold greater than 0 is defined, the set number of iterations will not always be used; processing stops once a certain resolution quality is achieved. The number of actual iterations is documented in the image Info view.
Quality Threshold Sets the threshold for quality. This input is connected to the Maxi-
mum Iterations parameter.
See also
2 Using Direct Processing [} 230]
3.11.8.4.2 Deblurring
Deblurring is a 2D background removal function based on the nearest neighbor algorithm. In the
nearest neighbor method, the input is a 3D image where an in-focus slice is enhanced by sub-
tracting its convolved neighbor(s). In computational clearing, the input is a 2D image. The neigh-
bor is approximated using the 2D out-of-focus PSF from the microscope. It is then subtracted
from the 2D input image the same way as the nearest neighbor algorithm.
Parameter Description
Normalization Here you can select how the gray/color values that exceed or fall
short of the value range should be dealt with. If you use this method
with Direct Processing, only the Clip method is available and prese-
lected.
– Automatic Automatically sets the gray levels that exceed or fall short of the pre-
defined gray value range to the lowest or highest gray value (black or
white). The effect corresponds to underexposure or overexposure. In
certain circumstances some information may therefore be lost.
Strength Sets the strength of the out of focus neighbor that is being sub-
tracted. A higher value results in less background.
Parameter Description
Sharpness Sets the sharpness which is inversely related to the regularization
strength. A higher value results in a sharper image. This parameter is
only valid for the final deconvolution that is done for further refine-
ment of the subtracted image.
Blur Radius Sets the PSF radius in pixels. The value of radius is proportional to the
focal distance. A bigger radius results in less background.
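The nearest-neighbor subtraction described above can be sketched in 2D, with a Gaussian blur standing in for the out-of-focus PSF (an assumption for illustration; the real PSF comes from the microscope optics):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hedged sketch of the 2D nearest-neighbor idea: the out-of-focus
# contribution is approximated by blurring the input image (a Gaussian
# standing in for the out-of-focus PSF, which in reality comes from the
# microscope) and subtracting a scaled version of it.
def deblur_2d(image, strength=0.9, blur_radius=3.0):
    """Subtract an estimated out-of-focus 'neighbor' from a 2D image."""
    neighbor = gaussian_filter(image.astype(float), sigma=blur_radius)
    result = image - strength * neighbor
    return np.clip(result, 0, None)  # negative values are clipped to zero

# A small bright feature on a uniform haze: the haze is largely removed
# while the feature keeps most of its intensity.
img = np.full((32, 32), 50.0)
img[16, 16] += 200.0
sharpened = deblur_2d(img)
```

A higher strength removes more background, and a bigger blur radius models a more distant out-of-focus contribution, matching the parameter descriptions above.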
See also
2 Using Direct Processing [} 230]
2 General Settings [} 83]
Deconvolution can be done either by using the computer's CPU or by using a graphics card. Using a graphics card can speed up deconvolution processing quite dramatically. An NVIDIA graphics card that supports the CUDA processing library is required. Contact your sales representative for further details about supported graphics card models.
For a detailed overview of all deconvolution methods (and their combinations), see Deconvolution Methods in ZEN [} 590]. This method allows you to use and individually configure 4 different algorithms for deconvolution (DCV for short). Two tabs are available for detailed configuration:
§ On the Deconvolution tab, you can select the desired algorithm and define the precise set-
tings for it, see Deconvolution Tab [} 95].
§ On the PSF Settings tab, you can see and change all key parameters for either generating a
theoretically calculated PSF, or selecting an experimentally measured PSF, see PSF Settings
Tab [} 99].
See also
2 Performing Configurable Deconvolution [} 595]
Info
Expert knowledge is required for some of the settings. If you are in doubt, leave the settings
unchanged.
Parameter Description
Algorithm Select the deconvolution algorithm. The following algorithms are
available:
§ Nearest Neighbor
§ Regularized Inverse Filter
§ Fast Iterative
§ Constrained Iterative
Parameter Description
Enable Channel Selection Activated: Applies the settings on a channel-specific basis. This
allows you to set parameters for each channel individually. A separate, colored tab for each of
the channels is displayed.
Deactivated: Applies the same settings to all channels of a multi-
channel image.
Normalization Specifies how the data of the resulting image is handled if the gray/
color levels exceed or fall short of the value range.
– Clip Clips the values that exceed or fall short of the value range. Sets negative values to 0
(black). If the values exceed the maximum possible gray value of 65535 when the calculation is
performed, they are limited to 65535 (pixel is 100% white).
Results from different input images can be quantitatively compared
with each other.
– Automatic Normalizes the output image automatically. In this case the lowest value is 0 and
the highest value is the maximum possible gray value in the image (gray value of 65535). The
maximum available gray value range is always utilized fully in the resulting image.
Results from different input images cannot directly be compared
quantitatively with each other.
Set Strength Manually For the Nearest Neighbor and the Fast Iterative algorithm, the
checkbox is always activated.
Activated: Sets the desired degree of restoration with the slider. To
achieve strong restoration and best contrast, move the slider towards
Strong. To achieve lower restoration but smoother results, move the
slider towards Weak. If the setting is too strong, image noise may be
intensified and other artifacts, such as "ringing", may appear.
Deactivated: Determines the restoration strength for optimum image
quality automatically. This is recommended for widefield and confocal
images and is therefore deactivated by default.
The restoration strength is inversely proportional to the strength of
so-called regularization. This is determined automatically with the
help of Generalized Cross Validation (GCV).
Parameter Description
Corrections
To display parameters for image correction, click .
– Background Activated: Analyzes the background component in the image and re-
moves it before the deconvolution calculation. This can prevent back-
ground noise being intensified during deconvolution.
– Bad Pixel Correction Activated: Employs a fully automatic detection and removal of spurious
or hot pixels (also known as stuck pixels) in an image stack which might interfere with the
deconvolution result. It is based on the analysis of the gray level variance in the neighborhood
of each pixel in the image. It is recommended to use this parameter only if stuck pixels are
observed in the input image.
Execution Options Only visible if Enable Channel Selection is activated. Enables you to
select how the respective channel is handled during processing.
– Copy This channel is copied into the output image without processing.
– Skip Skips the channel when processing. The respective channel will not be
in the output image.
This section is only visible if you have selected the Regularized Inverse Filter, Fast Iterative or
Constrained Iterative algorithm. To display advanced settings, click .
Parameter Description
Likelihood Visible for Fast Iterative and Constrained Iterative algorithms.
Selects which likelihood calculation you want to work with.
Parameter Description
– Gauss Only visible for the Constrained Iterative algorithm.
Computation assuming a Gaussian noise distribution. If detector noise is dominant over sample
noise, using a Gaussian noise model can be advantageous; however, this is rarely the case with
modern microscopy systems.
Regularization Selects which frequencies in the image are taken into account during
regularization.
– None Only visible for the Fast Iterative and Constrained Iterative algo-
rithms.
No regularization is performed.
– First Order Only visible for the Regularized Inverse Filter and Constrained It-
erative algorithms.
Regularization based on Good's roughness. Under certain circum-
stances, more details are extracted from noisy data. It may be better
suited to the processing of confocal data sets.
First Estimate Visible for Fast Iterative and Constrained Iterative algorithms.
– Input Image The input image is used as the first estimate of the target structure
(default).
Parameter Description
– Last Result Image The result of the last calculation is used as the estimate for the next
calculation. This can speed up a calculation that is repeated using slightly different parameters.
– Mean of Input No estimate is made; the mean gray level of the input image is used. This is
the most rigid application of deconvolution. It should be chosen for confocal images, where the
data sampling can be quite sparse. The computation time will increase, but missing information
can be recovered from the PSF.
Maximum Iterations Visible for Fast Iterative and Constrained Iterative algorithms.
Sets the maximum permitted number of iterations. In the case of Richardson-Lucy, you should
allow significantly more iterations here.
Quality Threshold Only visible for the Fast Iterative and Constrained Iterative algo-
rithms.
Defines the quality level at which you want the calculation to be
stopped. The percentage describes the difference in enhancement be-
tween the last and next-to-last iteration compared with the greatest
difference since the start of the calculation. 1% is the default value.
Lowering this can bring about small improvements in quality.
GPU Acceleration Only visible if a suitable (NVIDIA, CUDA based) graphics card is in-
stalled in your PC. The checkbox is then activated by default.
Activated: Uses GPU processing.
Deactivated: Uses CPU processing.
GPU Tiling Only available for very large images that exceed the available graphics card
memory.
Activated: With this function the image is split up into smaller portions which fit into the
memory of the graphics card. The function automatically determines into how many tiles the
image must be split to allow maximum usage of the graphics card. The resulting tiles are
automatically stitched together for the final output result.
Deactivated: No tiling is performed, however, in this case only cer-
tain sub-functions of deconvolution can run on the graphics card and
the speed increase compared to CPU processing will be lower. The
image quality might be higher than with tiling because there is no
need for stitching.
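The Quality Threshold stopping rule described above can be sketched generically. The function name and the scalar update used here are hypothetical illustrations, not ZEN API; the sketch only demonstrates the criterion itself: stop when the improvement between the last two iterations falls below a given percentage of the largest improvement observed since the start.

```python
def run_iterations(update, x0, max_iter=50, quality_threshold=0.01):
    # Hedged sketch of the stopping rule: compare the latest improvement
    # (delta) against the largest improvement seen so far (max_delta).
    x_prev = x0
    max_delta = 0.0
    for i in range(1, max_iter + 1):
        x = update(x_prev)
        delta = abs(x - x_prev)
        max_delta = max(max_delta, delta)
        # Stop when the relative improvement drops below the threshold (1% default).
        if max_delta == 0 or delta / max_delta < quality_threshold:
            return x, i
        x_prev = x
    return x_prev, max_iter
```

With a converging update such as x → 0.5·x + 1, the loop stops well before Maximum Iterations is reached; lowering the threshold lets it run longer for small additional gains.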
All key parameters for generating a theoretically calculated Point Spread Function (PSF) are dis-
played on this tab.
Info
Usually, images (with file type *.czi) that have been acquired with ZEN automatically contain
all microscope parameters, meaning that you do not have to configure any settings on this
tab. Therefore, most parameters are grayed out in the display. It is possible, however, that as a
result of an incorrect microscope configuration values may not be present or may be incorrect.
You can change them here. The correction of spherical aberration can also be set here.
The most important microscope parameters for PSF generation that are not channel-specific are
displayed in this section.
Info
If you enter incorrect values, this can lead to incorrect calculations. If the values here are obvi-
ously wrong or values are missing, check the configuration of your microscope system.
Parameter Description
Microscope Displays which type of microscope has been used. There are two main
options: conventional microscope (also known as a widefield micro-
scope) and confocal microscope, for which the additional pinhole di-
ameter parameter applies.
Immersion Displays the refractive index of the immersion medium. Note that this
can never be smaller than the numerical aperture of the objective.
You can make a selection from typical immersion media in the drop-
down list next to the input field.
Override To change the input fields that are normally grayed out, click on the
button. The input fields and drop-down lists are now active. The text
on the button changes to Reset. To restore the original values saved
in the image, click Reset.
Master Reset Resets the metadata to the values which were originally stored in the
image at time of acquisition. It reverts any changes made by clicking
Override.
Parameter Description
Phase Ring If you have acquired a fluorescence image using a phase contrast ob-
jective, the phase ring present in the objective is entered here. This
setting has significant effects on the theoretical Point Spread Func-
tion (PSF).
Parameter Description
– Scalar Theory The wave vectors of the light are interpreted as electrical field = inten-
sity and simply added. This method is fast and is sufficient in most
cases (default setting).
– Vectorial Theory The wave vectors are added geometrically. However, the calculation takes
considerably longer.
Z-Stack This field can only be changed if it was not possible to define this pa-
rameter during acquisition, e.g. because the microscope type was un-
known. It describes the direction in which the z-stack was acquired.
Note that this setting is only relevant if you are using the spherical aberration correction.
Parameter Description
Enable Correction Activated: Uses the correction function. All options are active and
can be edited.
Refractive Index Displays the refractive index of the selected embedding medium. En-
ter the appropriate refractive index if you are using a different embed-
ding medium.
Parameter Description
Distance to Cover Displays the distance of the acquired structure from the side of the
Slip cover slip facing the embedding medium. Half the height of the z-
stack is assumed as the initial value for the distance from the cover
slip. The value can be corrected if this distance is known. If possible,
this distance should be measured.
Note: Use Ortho View and the Distance Measurement option to
define the distance of the sample to the coverslip. It is also important to estimate the position
of the glass/embedding medium interface as precisely as possible. If the z-stack extends into the
coverslip, the determined range of the stack which reaches into the glass should be entered as a
negative value. Example: the z-stack is 26 µm thick and the glass/medium interface is positioned
at a distance of 9 µm from the first plane of the stack. Resulting value for Distance to Cover
Slip: -9.0 µm.
Cover Slip Type Commercially available cover slips are divided into different groups
depending on their thickness (0, 1, 1.5 and 2), which you can select
from the dropdown list. Cover slips of the 1.5 type have an average
thickness of 170 µm. In some cases, however, the actual values can
vary greatly depending on the manufacturer. For best results the use
of cover slips with a guaranteed thickness of 170 µm is recom-
mended. Values that deviate from this can be entered directly in the
input field.
Cover Slip Ref. Index Selects the material that the cover slip is made of. The corresponding
refractive index is displayed in the input field next to it.
Working Distance Displays the working distance of the objective (i.e. the distance be-
tween the front lens and the side of the cover slip facing the objec-
tive). The working distance of the objective is determined automati-
cally from the objective information, provided that the objective was
selected correctly in the MTB 2011 Configuration program. You can,
however, also enter the value manually.
In this section you find all settings that are channel specific. This means that they can be config-
ured differently for each channel.
Parameter Description
Use External PSF Activated: Uses an externally measured PSF. You find an additional input
window under Image Parameters > Input where you can select the external PSF file. The
software checks whether the PSF's microscope parameters match the input image. If they
deviate (a deviation of up to 10 nm in wavelength is accepted), the software uses a theoretical
PSF instead.
Attach to Input If an external PSF was selected, you can attach the file to the input
image. The saved input image then contains the correct measured
PSF. Using a theoretical PSF is also possible for such an image: just deactivate Use External
PSF.
Illumination Displays the excitation wavelength for the channel dye [in nm] by us-
ing the peak value of the emission spectrum. The color field corre-
sponds to the wavelength (as far as possible).
Detection Displays the peak value of the emission wavelength for the channel
dye. The color corresponds to the wavelength (as far as possible).
Sampling Lateral Depends on the geometric pixel scaling in the X/Y direction and dis-
plays the extent of the oversampling according to the Nyquist crite-
rion. The value should be close to 2 or greater in order to achieve
good results during DCV. As, in the case of widefield microscopes,
this value is generally determined by the objective, the camera
adapter used and the camera itself, it can only be influenced by the
use of an Optovar. With confocal systems, the zoom can be set to
match this criterion.
Sampling Axial Depends on the geometric pixel scaling in the Z direction and displays
the extent of the oversampling according to the Nyquist criterion.
The value should be at least 2 or greater in order to achieve good re-
sults during DCV. This value is determined by the increment of the fo-
cus drive during acquisition of Z-stacks and can therefore be changed
easily.
Pinhole Only available if a confocal microscope has been entered under the
microscope parameters.
Displays the size of the confocal pinhole in Airy units (AU).
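As a rough guide, the lateral and axial sampling values can be estimated from common textbook approximations (Abbe lateral resolution λ/(2·NA); widefield axial resolution ≈ 2·λ·n/NA²). These formulas are illustrative assumptions for orientation only, not necessarily the exact ones ZEN evaluates:

```python
def lateral_sampling(emission_nm, na, pixel_nm):
    # Samples per lateral resolution element; should be close to 2 or
    # greater (Nyquist criterion). Abbe resolution: lambda / (2 * NA).
    return (emission_nm / (2.0 * na)) / pixel_nm

def axial_sampling(emission_nm, na, refractive_index, z_step_nm):
    # Samples per axial resolution element; should be at least 2.
    # Common widefield approximation: 2 * lambda * n / NA^2.
    axial_resolution = 2.0 * emission_nm * refractive_index / na ** 2
    return axial_resolution / z_step_nm
```

For example, a 1.4 NA oil objective at 520 nm emission with 90 nm pixels and 300 nm z-steps samples at roughly 2.1 laterally and 2.7 axially, both just above the recommended minimum.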
Displays advanced microscope information that influences the form of the PSF in a channel-de-
pendent way:
Parameter Description
Illumination Selects the illumination method with which the data set has been ac-
quired. In the event that a Conventional Microscope has been en-
tered under the microscope parameters, the following options are
available here: Epifluorescence, Multiphoton Excitation and
Transmitted Light. In the case of confocal microscopes, Epifluores-
cence is the only option.
Image Formation Displays whether the imaging was incoherent (Conventional Micro-
scope) or coherent (Laser Scanning Microscope).
Axial FWHM Displays the FWHM (Full Width Half Maximum) as a measure of the
axial resolution of the PSF.
This section shows you the PSF that is calculated for a channel based on the current settings. If
you select the Auto Update checkbox, all changes made to the PSF parameters are applied im-
mediately to the PSF view. This makes it possible to check quickly whether the settings made
meet your expectations. You can extract the PSF from the image via right-click menu (PSF Snap-
shot) which opens the resulting new PSF document in the center screen area.
This method allows you to use four different algorithms for deconvolution, without any further
settings. The following algorithms are provided in the Parameter tool:
Parameter Description
Simple, very fast (Nearest Neighbor) Only available if a z-stack is selected as input image.
Executes the fast Nearest Neighbor method using default parameters.
Better, fast (Regularized Inverse Filter) Executes the Regularized Inverse Filter algorithm
for image enhancement.
Excellent, slow (Constrained Iterative) Only available if you have licensed the Deconvolution
functionality. Executes the Constrained Iterative quantitative restoration method.
See also
2 Performing Deconvolution Using Default Values [} 593]
Parameter Description
Use Wizard Activated: Opens the Measured PSF Wizard when applying the
function to create a PSF.
Bad Pixel Correction Employs a fully automatic detection and removal of spurious or hot
pixels (also known as stuck pixels) in an image stack which might interfere with the PSF
extraction procedure. It is based on the analysis of the gray level variance in the neighborhood
of each pixel in the image. It is usually recommended to leave this parameter active.
See also
2 Creating a PSF - With Wizard and Without [} 602]
3.11.8.5 EM Processing
This group lists image processing functions that are designed for processing FIB-SEM images.
Some of the functions can also be found in other groups of image processing functions. This
group is only visible if you have the license for EM Processing Toolbox.
See also
2 EM Processing Toolbox [} 793]
This method allows you to change the pixel type of an image. This can be useful if you want to
compare or combine images that have different pixel types.
Parameters
Parameter Description
Pixel Format Select the desired pixel format from the dropdown list.
Parameter Description
- 8 Bit B/W The output image is a monochrome image, the whole-number gray
values of which can lie in the range from 0 to 255.
- 16 Bit B/W The output image is a monochrome image, the whole-number gray
values of which can lie in the range from 0 to 65535.
- 32 Bit B/W Float The output image is a monochrome image with real numbers as pixel
values.
- 2x32 Bit Com- The output image is a monochrome image with complex numbers
plex (real part and imaginary part) as pixel values. Such images are gener-
ally created by means of transformation into the Fourier space.
- 24 Bit RGB The output image is a color image, the whole-number color values of
which in the red, green, and blue channels can lie in the range from 0
to 255.
- 48 Bit RGB The output image is a color image, the whole-number color values of
which in the red, green, and blue channels can lie in the range from 0
to 65535.
- 2x32 Bit RGB The output image is a color image with real numbers as color values
Float in the red, green and blue channels.
- 3x64 Bit RGB The output image is a color image with complex numbers (real part
Complex and imaginary part) in the red, green and blue channels. Such images
are generally created by means of transformation into the Fourier
space.
Parameter Description
Z-Alignment Setup The Setup button opens the Coarse Z-Stack Alignment setup to
manually align your z-stack.
Shift List Displays a list with all the shifts that have been made in the Coarse Z-Stack
Alignment setup.
See also
2 Aligning Z-Planes Manually [} 794]
With this setup you can manually align the planes of a z-stack. If you have imported your z-stack
and there are bigger shifts between individual planes of the stack, this setup helps you to make a
coarse alignment of your z-stack before an automated alignment, e.g. via Z-Stack Alignment
with ROI.
2 Image View
Displays your selected z-plane and the following plane with different colors. The current
plane is displayed in cyan and the following z-plane is displayed in red.
3 View Options
Here you have your standard view options (Dimensions Tab [} 1029] and Display Tab
[} 1043]).
See also
2 Aligning Z-Planes Manually [} 794]
This section of the Coarse Z-Stack Alignment setup contains the main controls to align the z-
planes of your stack manually.
Parameter Description
Speed Selects the speed/number of pixels by which the z-planes are shifted per click.
Parameter Description
– Shifts the z-planes by 1000 pixels per click.
Arrow Buttons Shifts the next and all following z-planes in the corresponding direc-
tion.
Visibility Weight slider Changes the color intensity of the current plane and the following
z-planes in the Image View. In extreme slider positions, only the current z-plane or only the
following z-planes are displayed.
Shift List Displays a list with all the shifts in x and y direction that you defined in
the setup.
This method allows you to extract parts from one image and use these to create a new image.
You can select these parts freely from the individual dimensions of the image. Each of the param-
eter sections is only visible if the corresponding dimension is present in the input image.
Info
Image Analysis Results
Note that if your image contains analysis results, the analysis results are deleted when you exe-
cute this function.
Parameter Description
Channels Selects which channels of the input image are used. All channels are
selected by default. To deselect a channel, click on the respective
channel button.
Z-Position, Time, Here you can select which parts of the input image you want to use
Block, Scene for the resulting image.
- Extract All If selected, all parts of the corresponding image are extracted.
- Extract Range If selected, you can select a certain range of images to be extracted.
- Extract Multiple If selected, you can select several continuous ranges and individual
sections.
Enter one or more sections that you want to select in the input field.
To do this, enter the first section, followed by a minus sign, and then
the last section. If you want to define an interval, after the last section
enter a colon and then the interval. The entry "2-10:2" means that ev-
ery second section is selected from section 2 to section 10.
Parameter Description
Enter a comma after the first section if you want to define another
section. You can also select individual sections separated by commas.
By entering "2-10:2,14-18,20,23", you select every second section
from section 2 to section 10, followed by sections 14 to 18, as well as
sections 20 and 23.
- Get current Adopts the position from the current display in the image area.
position
- Interval Activated: Interval mode is active. The Interval spin box/input field
appears.
Enter the desired interval here. E.g. if you enter the value 2 only every
2nd value from the range is considered.
Region Here you can select if you want to use the entire image or just a re-
gion (ROI) of the input image.
- Full Select this option to use the full image for the new image.
- Rectangle region (ROI) Select this option to draw in a rectangular region of interest, which
will be used for creating a new image.
If a rectangle region was drawn in, you can see and change its coordinates by editing the
X/Y/W/H input fields.
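The section syntax described above ("2-10:2,14-18,20,23") can be parsed as follows. This parser is a sketch written for this manual, not code taken from ZEN:

```python
def parse_sections(spec):
    # Parse entries like "2-10:2,14-18,20,23": comma-separated parts,
    # each either a single section, a range "first-last", or a range
    # with an interval "first-last:step".
    result = []
    for part in spec.split(","):
        step = 1
        if ":" in part:
            part, step_text = part.split(":")
            step = int(step_text)
        if "-" in part:
            first, last = (int(v) for v in part.split("-"))
            result.extend(range(first, last + 1, step))  # last is inclusive
        else:
            result.append(int(part))
    return result
```

For example, `parse_sections("2-10:2")` yields every second section from 2 to 10, matching the description above.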
This function allows you to define a region of interest in your z-stack and cut it out of the stack as
a separate volume.
Parameter Description
Define Regions Opens the Define Regions setup [} 110] to define the region of inter-
est to be cut out of your z-stack.
See also
2 Cutting Out a Volume from a Z-Stack [} 796]
2 Image View
Here you can draw your support regions into the current z-plane and edit them if
needed.
3 View Options
Here you have some standard view options as well as the Support Regions Tab [} 111].
See also
2 Cutting Out a Volume from a Z-Stack [} 796]
With these tools you can draw regions into your image and edit them.
Parameter Description
Draw Draws a new region into the image.
Erase Using this button, you can erase parts of a region. Holding down the
left mouse button, outline the parts of the region that you want to
erase. Right-click to erase the parts.
Merge Use this button to connect regions. Holding down the left mouse button, extend the
outline of an existing region or draw a connection between the regions that you want to merge.
Right-click to merge them.
Parameter Description
Fill Fills a hole. To fill a hole, left-click on the hole.
– Define Single In this mode you should only define a single region on each support
Region slice of the image. For the interpolation, the support regions on the
different slices do not necessarily have to overlap with one another.
– Define Multiple In this mode you can define multiple regions on each support slice of
Regions the image. Interpolation here is only possible, if the support regions
on each slice have some overlap with the respective regions on the
previous/next support slice.
Interpolate Interpolates regions in the slices between the drawn support regions
and displays the interpolated regions in the image.
Clear All Removes all regions (drawn support regions and interpolated ones) in
the entire stack.
See also
2 Cutting Out a Volume from a Z-Stack [} 796]
Parameter Description
Show Objects Activated: Displays the regions in the Image View.
Deactivated: Hides the regions in the Image View.
Refresh Support Updates the list below with the drawn support regions.
Slice List
Parameter Description
Support Slice List Displays a list with information on the Position(s) where support re-
gions have been drawn and the number of regions and holes on the
respective slice.
This list is only updated when you click on Refresh Support Slice
List.
3.11.8.5.5 Denoise
This method removes noise from images using wavelet transformations or total variation. The
process of denoising an image with wavelet transformations can be broken down into the follow-
ing three parts:
Parameter Description
Method
- Complex wavelets The Dual Tree Complex Wavelet transform provides better results because
it is nearly direction invariant and makes more directional sub-bands available. The results will
be less prone to block artefacts. However, this method is computationally more intense and
therefore takes longer.
- Real wavelets The real wavelet transform only considers the three axis directions (XYZ) and
is therefore faster. However, the result can show block artefacts.
- Total variation An algorithm based on A. Chambolle, "An Algorithm for Total Varia-
tion Minimization and Applications", J. Math. Imaging and Vision 20
(1-2): 89-97, 2004.
It uses the L1 norm for optimization. Typically, total variation gener-
ates small plateaus with constant gray values. The size of each of the
small plateaus depends on the Strength setting.
Strength Here you adjust the strength with which the function is applied.
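The wavelet-shrinkage idea behind the two wavelet methods can be illustrated with a deliberately minimal sketch: a single-level, 1-D Haar transform with soft thresholding. ZEN's implementation (multi-level, 2-D/3-D, dual-tree complex) is considerably more elaborate; this only shows the principle that noise is shrunk in the detail coefficients while the coarse structure is preserved:

```python
import numpy as np

def soft_threshold(x, t):
    # Shrink coefficients toward zero by t; small (noise-like) ones vanish.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_denoise(signal, threshold):
    # One-level Haar wavelet denoising of a 1-D signal of even length.
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)   # low-pass (structure)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)   # high-pass (noise lives here)
    detail = soft_threshold(detail, threshold)  # the Strength analogue
    out = np.empty_like(s)
    out[0::2] = (approx + detail) / np.sqrt(2)  # inverse transform
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out
```

With threshold 0 the signal is reconstructed exactly; a larger threshold (higher strength) flattens small pixel-to-pixel differences.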
See also
2 Using Direct Processing [} 230]
With this function you can improve the contrast of your image.
Parameter Description
Clip Limit (%) Defines the value at which the histogram is clipped. It is used to avoid
oversaturation of the image in homogeneous areas. It is determined
by the normalized histogram of the local region. The higher the value
is set, the higher the contrast will be.
Region Size (%) Defines the size of every single region or tile where the local his-
togram is calculated as percentage of the image size. For example, a
region size of 50% has the size of half the image.
A small region size increases the contrast but takes more calculation
time.
Bins Defines the number of histogram bins that is used to create the con-
trast enhancing transformation. A higher number of bins covers a
greater dynamic range but increases calculation time.
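The effect of Clip Limit (%) can be illustrated in isolation: bin counts above the limit are cut off and the clipped excess is redistributed evenly over all bins, which caps how steep the resulting contrast transformation can become in homogeneous areas. The sketch below is illustrative, not ZEN's implementation:

```python
import numpy as np

def clipped_histogram(values, bins, clip_limit_frac):
    # Build a histogram and clip each bin at a fraction of the total count,
    # redistributing the clipped excess evenly over all bins.
    hist, edges = np.histogram(values, bins=bins)
    limit = clip_limit_frac * values.size
    excess = np.maximum(hist - limit, 0).sum()
    hist = np.minimum(hist, limit).astype(float)
    hist += excess / bins   # redistribution keeps the total count constant
    return hist, edges
```

A higher clip limit allows steeper histogram peaks to survive, and therefore a stronger local contrast enhancement.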
Info
Difference between Result and Preview
The actual result of the function and the preview you see in the preview window can be differ-
ent. This is caused by the parameter Region Size (%) and the different size of the preview
and original image. The function splits the image into small regions and processes them indi-
vidually based on the histogram of every single region. The histogram is usually related to the
absolute region size. For the preview only the visible part of the image in the view is pro-
cessed.
3.11.8.5.7 Gauss
This method allows you to reduce noise in an image. Each pixel is replaced by a weighted average
of its neighbors. The neighboring pixels are weighted in accordance with a two-dimensional
Gauss bell curve.
Parameter Description
Sigma Here you can adjust the sigma value. If the Show All mode is acti-
vated, you can adjust the values in each dimension individually.
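A per-dimension sigma as described above can be reproduced with a standard Gaussian filter, for example SciPy's; the stack shape and sigma values below are arbitrary examples:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

stack = np.random.default_rng(0).random((16, 64, 64))   # (z, y, x) test data
# Adjust sigma per dimension: here stronger smoothing along z than along y/x.
smoothed = gaussian_filter(stack, sigma=(2.0, 1.0, 1.0))
```

Each output pixel is a Gaussian-weighted average of its neighbors, so the overall intensity variation decreases while the image dimensions stay unchanged.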
3.11.8.5.8 Highpass
This method performs high-pass filtering. The high pass filter is defined as the difference between
the original image and the low-pass filtered original.
Parameter Description
Normalization Depending on the image processing function you have selected not
all choices are available in the list.
- Clip Gray levels that exceed or fall below the specified gray value range
are automatically set to the lowest/highest gray value (black or white).
The effect corresponds to underexposure or overexposure. This means
that in some cases information is lost.
Parameter Description
- Automatic Automatic normalization of gray values to the available gray value
range.
- Wrap If the result is greater than the maximum gray value of the image, the
value maximum gray value +1 is subtracted from it.
- Shift Normalizes the output to the value gray value + max. gray value/2.
Count Here you set the number of repetitions, i.e. the number of times the function is applied
sequentially to the respective result of the filtering. The effect is increased correspondingly.
Kernel Size You can set the filter size in the x-, y- and z-direction, symmetrically around the
subject pixel. This should match the size of the transition region between objects and
background.
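The definition above (high pass = original minus the low-pass filtered original, applied Count times) can be sketched directly. The box (mean) filter standing in for the low pass here is an assumption for illustration; any low-pass filter fits the definition:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def highpass(image, kernel_size, count=1):
    # High pass as the difference between the image and its low-pass
    # (here: mean-filtered) version, repeated `count` times.
    result = np.asarray(image, dtype=float)
    for _ in range(count):
        result = result - uniform_filter(result, size=kernel_size)
    return result
```

On a perfectly flat image the low-pass result equals the input, so the high-pass output is zero everywhere; only transitions between objects and background survive.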
This function allows you to import SmartFIB stacks of Crossbeam microscopes into ZEN.
Parameter Description
Select Files Opens a file browser to select the images for import.
Z-Spacing Sets the slice distance in z. Per default, Auto is activated, and the dis-
tance is calculated automatically with information from the image
metadata of the first and last selected image.
Deactivate Auto to enter a value manually.
XY-Scaling Sets the scaling in x and y. Per default, Auto is activated, and the
scaling is automatically taken from the image metadata of the first se-
lected image.
Deactivate Auto to enter a value manually, e.g. if you import images
without scaling information in the metadata.
Sample Angle Sets the angle of the sample. Per default, Auto is activated, and the sample
angle is automatically calculated from the image metadata or set to the default of 54 degrees,
with the image rendered with a 90 degree tilt (if no information is available in the metadata). In
this case the SmartFIB stack will be displayed perpendicular to the images acquired on a light
microscope.
Deactivate Auto to enter a value manually, e.g. if you know the origi-
nal sample angle as set in the SmartFIB software and import images
without using the information in the metadata.
Read XY Offsets Activated: Reads the xy offset of the individual slices from the meta-
data. Note that this can lead to a slanted z-stack depending on the
sample orientation during imaging and the metadata.
3.11.8.5.10 Median
This method allows you to reduce noise in an image. Each pixel is replaced by the median of its
neighbors. The size of the area of the neighboring pixels considered is defined by a quadratic filter
matrix. The modified pixel is the central pixel of the filter matrix. The median is the middle value
of the gray values of the pixel and its neighbors sorted in ascending order.
Parameter Description
Kernel Size Here you can adjust the size of the filter matrix. If the Show All mode
is activated, you can adjust the values in X, Y and Z direction individu-
ally.
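The effect described above can be demonstrated with a 3×3 median filter on a single hot pixel, using SciPy's standard implementation as a stand-in for the method:

```python
import numpy as np
from scipy.ndimage import median_filter

noisy = np.zeros((9, 9))
noisy[4, 4] = 100.0                      # a single hot pixel
cleaned = median_filter(noisy, size=3)   # 3x3 filter matrix
```

The hot pixel disappears because the median of its 3×3 neighborhood (eight zeros and one 100) is 0, while a Gaussian average would merely spread it out.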
3.11.8.5.11 Not
This function performs a binary "not" operation on all bits of the binary representation of an
input pixel's gray value. A 0-bit in the input pixel results in a 1-bit in the corresponding output
pixel, and a 1-bit in the input becomes a 0-bit in the output. For integral image types, the
resulting output gray value is the maximum possible gray value minus the input gray value; for
float image types the results are not meaningful due to the inhomogeneous float format.
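For an 8-bit integral image, the relationship between the bitwise NOT and the gray-value difference can be verified directly:

```python
import numpy as np

value = np.uint8(200)
inverted = np.invert(value)   # bitwise NOT of all 8 bits
# For integral types: NOT(v) equals max_gray - v, here 255 - 200 = 55.
```

The same identity holds for 16-bit images with a maximum gray value of 65535.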
This function allows you to remove stripe artifacts from your image. It is recommended to use the
output preview to determine the suitable values for your parameters. You can display the output
preview window by clicking on Preview in the Output tool [} 984].
Parameter Description
Method Selects the method you want to use for removing the stripe artifacts in your image.
– GPU VSNR Only available if you have an NVIDIA GPU in your machine.
Uses a special GPU-based VSNR algorithm.
– CPU Uses a CPU optimized algorithm to remove artifacts from the image.
Iterations Sets the number of iterations after which the calculation is ended.
First Filter Displays the settings of the first filter to remove vertical or horizontal
stripes in your image.
– Noise Level Sets the intensity level of the stripes in your image.
– Width The calculation for the removal of stripes in the image is based on a
Gaussian curve. This parameter sets the width of the curve.
– Height The calculation for the removal of stripes in the image is based on a
Gaussian curve. This parameter sets the elongation of the curve.
– Angle Sets the angle for stripes in your image. An angle of 0 corresponds to
vertical stripes. An angle of 90 (or -90) corresponds to horizontal
stripes.
Second Filter To remove a second set of stripes (e.g. if you have both vertical and
horizontal stripes in your image), you can activate a second filter. For
information about the parameters, see the descriptions above.
– Enabled Activated: Displays the settings for a second filter to remove striping
artifacts in the image.
Deactivated: Hides the settings for a second filter and only the first
filter is applied.
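As a rough illustration of how the Width and Height parameters of such a filter shape a Gaussian curve (a hypothetical parameterization for illustration, not the actual VSNR filter used by ZEN):

```python
import math

def stripe_filter_curve(x, width, height):
    """Hypothetical Gaussian parameterization illustrating the Width and
    Height parameters described above; not the actual VSNR filter."""
    # width sets the spread of the curve, height its elongation (peak)
    return height * math.exp(-(x * x) / (2.0 * width * width))

stripe_filter_curve(0.0, 2.0, 1.5)  # the peak value equals the height
```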
This method allows you to improve images in which the quality has been impaired by uneven illumination or vignetting.
If you want to perform shading correction before an experiment (recommended), you have to use
the shading correction function in the Camera tool, in the Post-Processing section [} 882].
Parameter Description
Shading Mode Selects which shading correction should be applied.
- Channel Specific For multichannel references, the information from each channel is applied individually to the input image. The number and type of channels must be the same between input image and shading reference image. This is the correct setting when using a reference image which was created by the Shading Reference for Processing function (e.g. for Axioscan images).
- Tile Specific In case a shading reference has more than one tile, the tiles of the reference and the tiles of the image that should be corrected are mapped to each other. The tiles must be identical in number and XY position. This can be the case, for example, when using an Axiozoom where shading is primarily determined by the sample mounting situation rather than the optical system.
Shading Reference Only available in Batch mode and if Global Shading is selected, or
Camera Shading is selected and Automatic is deactivated.
Enables you to select a reference image for the shading correction.
Display Mode
- Additive In this mode the normalized reference image is subtracted from each camera frame. This influences the brightness of the image.
- Multiplicative In this mode each camera frame is divided by the normalized reference image. This influences the contrast of the image. This is the default setting.
The simulated/auto reference image is created by averaging up to 20 camera frames in the input image and running a lowpass filter on them.
Offset Adjust the gray value that will be added to the newly calculated gray values using the slider or input field. If this results in negative values, these are set to 0. Values that exceed the maximum gray value are set to the maximum gray value.
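The two display modes can be sketched as follows (a minimal illustration assuming an already-normalized reference image; not the ZEN implementation):

```python
import numpy as np

def shading_correct(frame, reference, mode="multiplicative"):
    """Sketch of the two shading-correction modes described above.
    `reference` is assumed to be normalized. Not the ZEN implementation."""
    if mode == "additive":
        # the normalized reference image is subtracted from each frame
        return frame - reference
    # multiplicative (default): each frame is divided by the reference
    return frame / reference

flat = np.array([[100.0, 200.0], [100.0, 200.0]])
vignette = np.array([[1.0, 0.5], [1.0, 0.5]])   # right column darker
shading_correct(flat * vignette, vignette)       # recovers the flat frame
```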
See also
2 General Settings [} 83]
With this function you can replace one or more z-slices of a stack with the respective previous or
following slice.
Parameter Description
Replace with Next Replaces the currently selected z-slice with the following one.
Replace with previous Replaces the currently selected z-slice with the previous one.
Replacement Table Only visible if at least one replacement has been set.
Displays information about the slice replacement and which Z is replaced.
See also
2 Replacing Z-Slices in a Z-Stack [} 796]
This function allows you to select a folder with FIB-SEM tiffs and sort these files into separate subfolders according to channel name, number of pixels, image size, and spacing of the images, corresponding to slice thickness. The files are sorted into subfolders of the output folder according to their metadata. This means that each time one of the above-mentioned metadata changes from one image to the subsequent image, a new subfolder is created. Thus, subfolders with homogeneous metadata with respect to xyz-scaling and image size are created.
Note that this function only works if the tiff files have their default names assigned by the microscope (e.g. channel0_slice_0001.tiff or slice_0001.tiff)! All tiff files with other names are ignored by the function.
Parameter Description
Input Folder Selects the folder where the tiff files which should be sorted are saved. Click on the folder button to open a file browser and navigate to the respective folder. Note that this function does not include images from subfolders; all images that you want to be sorted have to be in this input folder!
Output Folder Selects the folder where the subfolders with the sorted tiff files are placed. Click on the folder button to open a file browser and navigate to the respective folder. The function automatically creates subfolders in this folder.
File Operation
– Copy Copies the images into the output folder. Your data is duplicated and
remains unchanged in the input folder.
– Move Moves the images from your input folder into the output directory.
– Hard Link Creates links in the output folder to each image file in the input
folder, i.e. the space occupied by the images on the hard disk remains
unchanged.
This function allows you to automatically align the individual planes of a z-stack image if they are
not positioned precisely above each other.
Parameter Description
Quality Selects the quality level that you want the function to work with. The calculation of the alignment is based on a so-called image pyramid. The higher the selected quality, the more levels of the image pyramid are used to calculate the alignment and the more precise the alignment will be. However, the higher the selected quality is, the slower the calculation of the alignment will get.
– Low This is the most imprecise but also the fastest calculation of the alignment. It uses a low number of levels (2) of the image pyramid for the alignment calculation.
– Medium This is a more precise but also slower calculation of the alignment than the one before. It uses a medium number of levels (3) of the image pyramid for the alignment calculation.
– High This is a more precise but also slower calculation of the alignment than the two before. It uses a high number of levels (4) of the image pyramid for the alignment calculation.
– Highest This is the most precise but also the slowest calculation of the alignment. It uses the highest number of levels of the image pyramid for the alignment calculation.
Registration Method Selects the method which is used to align the images.
– Translation The neighboring sections of the z-stack are shifted in relation to each
other in the X and Y direction.
– Rotation The neighboring sections of the z-stack are rotated in relation to each
other.
– Translation + Rotation The neighboring sections of the z-stack are translated and rotated in relation to each other.
- Nearest Neighbor The output pixel is given the gray value of the input pixel that is closest to it.
- Linear The output pixel is given the gray value resulting from the linear combination of the input pixels closest to it.
- Cubic The output pixel is given the gray value resulting from a polynomial function of the input pixels closest to it.
Region Selects which parts of the image should be considered for the calculation of the transformation matrix.
- Rectangle Region Allows you to draw a region of interest into the image. Only the image information of this region is then considered for the calculation of the transformation matrix for alignment. The resulting transformation matrix will be applied to the full image.
After you have drawn a rectangle region into the image, you can see and change its coordinates with the X/Y/W/H input fields.
Crop Output Not visible if the Third Dimension (if available) is set to a particular
dimension instead of 2D Slices.
Activated: The image is cropped and only keeps the section which is
covered by the entire dimension, i.e. by all z-slices. The output image
can be of different size than the input image.
Deactivated: The output image is not cropped and keeps the size of
the input image. The image borders might get filled with a default
pixel value.
Single Component Activated: Displays the image channels. Selects one of the image channels which will be used to calculate the alignment transformation matrix, which is then also applied to the other channel(s).
Deactivated: Calculates an alignment transformation matrix for each individual channel.
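The interpolation options above differ in how the output gray value is derived from nearby input pixels; in one dimension, the first two can be sketched as follows (illustration only, not the ZEN implementation):

```python
def interpolate_nearest(a, b, t):
    """Nearest Neighbor sketch: take the gray value of the closer input
    pixel (t is the fractional position between pixels a and b)."""
    return a if t < 0.5 else b

def interpolate_linear(a, b, t):
    """Linear sketch: a linear combination of the two nearest pixels."""
    return (1.0 - t) * a + t * b

interpolate_nearest(10, 20, 0.25)  # -> 10
interpolate_linear(10, 20, 0.25)   # -> 12.5
```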
See also
2 Aligning Z-Planes Automatically (Based on a ROI) [} 795]
This function equalizes the gray values throughout an entire image stack. It scales the intensities of all slices to a common mean intensity value and standard deviation. The goal is a harmonization of the gray value distribution within a z-stack. With this process, brighter image slices will get darker and darker slices will get brighter.
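The described scaling can be sketched as follows (an illustration of the idea, assuming each slice is rescaled to the stack's global mean and standard deviation; not the exact ZEN algorithm):

```python
import numpy as np

def equalize_stack(stack, eps=1e-12):
    """Sketch of per-slice intensity equalization: each z-slice is
    rescaled so that all slices share a common mean and standard
    deviation. Not the exact ZEN algorithm."""
    target_mean = stack.mean()
    target_std = stack.std()
    out = np.empty_like(stack, dtype=float)
    for z, sl in enumerate(stack):
        # shift and scale this slice to the common mean and std
        out[z] = (sl - sl.mean()) / (sl.std() + eps) * target_std + target_mean
    return out

# a dark slice (50) and a bright slice (200) both end up at the common mean
stack = np.stack([np.full((4, 4), 50.0), np.full((4, 4), 200.0)])
equalize_stack(stack)
```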
3.11.8.6 Export/Import
Parameter Description
Directory Selects the folder with txm images that should be converted to czi. After the conversion, the images will again be stored in this folder.
Preview Only Displays a preview of the images in the folder, but does not convert them.
This method allows you to export single images into various file types so that you can continue to
use them in other programs. Multidimensional images (e.g. multichannel, z-stack, tile images) are
exported as individual images.
Name Function
File Type Select the desired file type from the dropdown list:
§ JPEG File Interchange Format (JPEG)
§ Windows Bitmap (BMP)
§ Tagged Image File Format (TIFF)
§ Tiff Format (64 bit) (Big TIFF)
§ Portable Network Graphics (PNG)
§ JPEG XR (WDP)
§ DigitalSurf SUR (SUR)
Note that different options are available depending on the file type
you have selected.
Quality Only available for the file types JPEG and JPEG XR.
Enter the image quality using the slider or input field to influence the
size of the file. Although low values result in very small files, image
quality may be considerably reduced.
Resize Adjust the image size in percent using the slider or input field. The resizing uses an algorithm for point-sampling from the nearest pyramid tile (the nearest pyramid tile which has higher resolution, so there is only sub-sampling and no up-sampling). The online pyramid sub-samples by a factor of 2 using a 5x5 Gaussian kernel, and the offline pyramid sub-samples by a factor of 3 using a binomial kernel.
Convert to 8 Bit Only available for the file types TIFF, BigTIFF, PNG and JPEG XR.
Activated: Converts a 16 bit gray level image into an 8 bit gray level
image, or a 48 bit color image into a 24 bit color image.
Compression Only available for the file type TIFF and BigTIFF.
Selects the compression method for reducing the data size.
- None Retains the data size of the original image. No compression is performed.
Merge All Scenes Only available for the file type BigTIFF.
Activated: Generates one image including all scenes.
Deactivated: Generates single scene images.
Parameter Description
Original Data Activated: Exports the image with the original channel colors and the
original display characteristic curve.
– Shift Pixel Activated: The pixels are shifted to 16 bit before converting to 8 bit. For example, a 14 bit image is first transformed to a 16 bit image, which is then converted to an 8 bit image. The 14 bit range is mapped to the whole 8 bit range.
Deactivated: No shift takes place. A 14 bit image is treated as a 16 bit image and therefore the transformation to 8 bit covers only a reduced range.
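The effect of the option can be illustrated with a few bit shifts (a hypothetical helper for illustration, not ZEN code):

```python
def to_8bit(value, source_bits, shift_pixel=True):
    """Illustrate the Shift Pixel option: map a `source_bits`-deep gray
    value to 8 bit, either shifting first to 16 bit (full range) or
    treating it as a 16-bit value directly (reduced range)."""
    if shift_pixel:
        value <<= 16 - source_bits  # e.g. 14 bit -> 16 bit
    return value >> 8               # 16 bit -> 8 bit

max_14bit = (1 << 14) - 1
to_8bit(max_14bit, 14, shift_pixel=True)   # -> 255 (whole 8-bit range)
to_8bit(max_14bit, 14, shift_pixel=False)  # -> 63  (reduced range)
```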
Apply Display Curve and Channel Color Activated: Exports the image with the changed channel color and display characteristic curve settings. These settings are applied to the pixel values of the exported images. They are particularly important if you want to use dark images with a dynamic range of more than 8 bits in other programs.
– Burn-in Annotations Activated: Burns the graphical elements into the image. The pixels under the graphical element (e.g. scale bars) are overwritten. The burnt-in graphical elements cannot be modified.
Short Format Selects which type of naming convention is used for the exported image, see Naming Convention for Image Export [} 69].
Activated: Uses the short format naming convention. This is the default setting.
Deactivated: Uses the long format naming convention.
Parameter Description
Use Full Set of Dimensions Select this option if you want to export all dimensions without changing them.
Define Subset Select this option if you only want to export individual dimensions or subsets of individual dimensions.
Info
Each of the sections described below is only visible if the corresponding dimension is present in
the input image.
Parameter Description
Channels Here you can select which channels of the input image you want to
be used. All channels are selected by default. To deselect a channel,
click on the relevant channel button.
Z-Position, Time, Block, Scene Here you can select which parts of the input image you want to use for the resulting image.
- Extract All If selected, all parts of the corresponding image are extracted.
- Extract Range If selected, you can select a certain range of images to be extracted.
- Extract Multiple If selected, you can select several continuous ranges and individual sections.
Enter one or more sections that you want to select in the input field. To do this, enter the first section, followed by a minus sign, and then the last section. If you want to define an interval, after the last section enter a colon and then the interval. The entry "2-10:2" means that every second section is selected from section 2 to section 10.
Enter a comma after the first section if you want to define another section. You can also select individual sections separated by commas. By entering "2-10:2,14-18,20,23", you select every second section from section 2 to section 10, followed by sections 14 to 18, as well as sections 20 and 23.
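The subset syntax can be illustrated with a small parser (a hypothetical helper written for this manual, not part of ZEN):

```python
def parse_subset(spec):
    """Parse the subset syntax described above: comma-separated entries,
    each either a single section or "start-end[:interval]".
    Hypothetical illustration, not part of ZEN."""
    sections = []
    for part in spec.split(","):
        rng, _, step = part.partition(":")
        start, _, end = rng.partition("-")
        if end:
            sections.extend(range(int(start), int(end) + 1, int(step or 1)))
        else:
            sections.append(int(start))
    return sections

parse_subset("2-10:2,14-18,20,23")
# -> [2, 4, 6, 8, 10, 14, 15, 16, 17, 18, 20, 23]
```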
- Get current position Adopts the position from the current display in the image area.
- Interval Activated: Interval mode is active. The Interval spin box/input field
appears.
Enter the desired interval here. E.g. if you enter the value 2 only every
2nd value from the range is considered.
Region Here you can select if you want to use the entire image or just a region (ROI) of the input image. Note that in combination with image dimensions, the rectangle is only drawn into one scene and only this is considered for export. For example, in an image with multiple scenes, the image export only considers the ROI drawn into the currently displayed scene, as defined on the Dimensions view options tab.
- Full If this option is selected, the full image is used for the new image.
- Rectangle Region If this option is selected, you can draw in a rectangle region of interest (ROI) which is used for creating a new image.
If a rectangle region is drawn in, you can see and change its coordinates by editing the input fields for X, Y, W, and/or H.
- Export Selected Tiles Exports the tiles according to the selected original image.
- Crop to Selection and Generate New Tiles Creates new tiles according to your entries in the fields Columns/Rows and Overlap.
- Columns/Rows Only available if Crop to Selection and Generate New Tiles is activated.
Defines how many rows and columns are used for the re-tiling.
- Overlap Only available if Crop to Selection and Generate New Tiles is activated.
Defines the overlap of the re-tiled tiles in %.
Parameter Description
Export To Sets path to the folder where images are exported to.
Generate xml File Creates xml files containing general information and metadata for the exported image(s).
– Long Format Activated: Selects the long format naming for the xml files, which includes the file type suffix in the xml titles as well as metadata for the metadata xml, see Naming of xml Files for Exported Images [} 70].
– Short Format Activated: Selects the short format naming for the xml files, which excludes the file type suffix from the xml titles as well as shortens metadata to meta for the metadata xml, see Naming of xml Files for Exported Images [} 70].
Prefix Adds the specified name as a prefix to the exported data. By default, this is the name of the original image data. For information on naming conventions, see Naming Convention for Image Export [} 69].
Using the Image Import function you can create a multidimensional image (multichannel, Z-
stack, time lapse, tile, position image) from individual images. The individual images may be in any
of the external formats supported by ZEN (see below). The resulting image can then be saved in
CZI format and processed further using the functions available in ZEN.
Supported file types
§ JPG images
§ BMP
§ TIFF
§ PNG
§ GIF
§ DeltaVision images
§ MetaFluor images
§ Multi page images
Parameters
Parameter Description
Multichannel Activated: Activates the settings to import multichannel images.
Adjust the settings for the multichannel image to be imported in the
list below.
- Use Channel Name as Identifier Activated: Uses the name specified in the Name column to identify the channel. The channel name will appear in the Preview display field in the Specify the Identifiers section.
Deactivated: Activates the Identifier, Start Index and Interval columns in the Specify the Identifiers section.
- Interval Here you enter the value in µm for the distance between the individual slices. The total height of the Z-stack is calculated automatically from this value and the number of slices.
- Slices Here you enter the number of slices of the Z-stack image.
- Range Here you enter the total height (range) of the Z-stack in µm. The distance between the individual slices is calculated automatically from this value and the number of slices and displayed in the Interval display field.
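Assuming the range spans from the first to the last slice, i.e. (slices − 1) intervals (the exact convention used by ZEN is not stated here), the relation can be sketched as:

```python
def zstack_range(interval_um, slices):
    """Total Z-stack height from slice spacing, assuming the range
    covers slices - 1 intervals (assumption, not confirmed by the manual)."""
    return interval_um * (slices - 1)

def zstack_interval(range_um, slices):
    """Slice spacing from total height, under the same assumption."""
    return range_um / (slices - 1)

zstack_range(0.5, 21)      # -> 10.0 µm total height
zstack_interval(10.0, 21)  # -> 0.5 µm between slices
```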
- Extended Microscope Parameters Activates additional parameters that are necessary for further processing of the imported image (e.g. for deconvolution), like:
§ Magnification: Select the objective magnification that was used for acquisition here.
§ Aperture: Here you can enter the value of the numerical aperture of the objective that was used for acquisition.
§ Immersion: Select the immersion medium that was used for acquisition here.
Time Series Activated: Activates the settings to import time series images.
- Interval Here you enter the value for the interval between the individual time
points. Select the unit of time from the dropdown list to the right of
the input field.
- Cycles Here you enter the number of cycles of the time series. The entered value affects the duration or the interval, depending on which value you have selected.
- Duration Here you can enter the value for the duration of the entire time series.
Select the unit of time from the dropdown list to the right of the input
field. Enter the number of time points in the Time Points input field.
- Columns Here you can enter the number of columns of the tile image.
- Overlap Here you can enter the percentage by which the tiles overlap.
- Rows Here you can enter the number of rows of the tile image.
- Meander Select this option, if the images to be imported were acquired in the
Meander acquisition/travel mode.
- Comb Select this option if the images to be imported were acquired in the
Comb acquisition/travel mode.
- Use Current Scaling Uses the geometric scaling currently selected and displays the values for Scale (X) and Scale (Y) with the corresponding unit in the relevant display field.
- Define scaling Enter the desired values in the Scale (X) input field and in the Scale
(Y) spin box/input field. Select the unit for the scaling value from the
dropdown list to the right of each input field.
Automatic If selected, all images that are available in an import folder are imported automatically.
- Import From Here you can import images according to a File List or a Multi page
image.
Click on the folder button to the right of the display field. The
names of the images are displayed in the File Name list below the
display field.
Specify the Identifiers Here you can enter and check all the settings you need to identify your images.
Parameter Description
Input Selects a txm file for import.
Using this function you can export multidimensional images (e.g. Time Series or Z-Stack images)
into various file types in the form of film sequences so that you can continue to use them in other
programs.
Info
If you want to export MOV files (H264 or MPEG4 codec) successfully, download the latest release version of the application FFmpeg, Windows 64-bit, Static (e.g. from https://www.ffmpeg.org/download.html). Copy ffmpeg.exe into the same folder where ZEN.exe is located (for example C:\Program Files\Carl Zeiss\ZEN 2\ZEN 2 (blue edition)).
See also
2 Exporting Movies [} 67]
Info
AVI (MS-Video1) mode is available for 32-bit Windows operating systems only.
Parameter Description
Format Select the desired mode here. The following formats are available for
the movie export.
§ AVI (M-JPEG compression)
§ AVI (uncompressed)
§ AVI (DV)
§ WMF (WindowsMedia)
§ MOV (H.264)
§ MOV (MPEG4)
§ AVI (MS-Video1)
- Original Size Not available for the file types AVI (DV) and AVI (MS-Video1).
Uses the height and width of the input image and sets the frame rate
to 5 frames per second.
- User-Defined Not available for the file types AVI (DV) and AVI (MS-Video1).
Enter the values in the Width, Height, and Frame Rate input fields.
- 720x576/25fps (PAL 576p/25) Uses the PAL (Phase Alternating Line) video resolution with 25 frames per second.
- 720x480/29.97fps (NTSC) Uses the NTSC (National Television Systems Committee) video resolution with 29.97 frames per second.
- 1280x720/50fps (HD 720p/50) Not available for the file types AVI (DV) and AVI (MS-Video1).
Uses the HD (High Definition 720) video resolution with 50 frames per second.
- 1920x1080/25fps (HD 1080p/25) Not available for the file types AVI (DV) and AVI (MS-Video1).
Uses the HD (High Definition 1080) video resolution with 25 frames per second.
- 1920x1080/29.97fps (HD 1080p/29.97) Not available for the file types AVI (DV) and AVI (MS-Video1).
Uses the HD (High Definition 1080) video resolution with 29.97 frames per second.
Width Only active if you have selected User-Defined in the Format dropdown list.
Here you can enter the width of the image in pixels (px).
Height Only active if you have selected User-Defined in the Format dropdown list.
Here you can enter the height of the image in pixels (px).
Frame Rate Only active if you have selected User-Defined in the Format dropdown list.
Here you can enter the frame rate in frames per second (fps).
Quality Only visible if you have selected AVI (M-JPEG compression) or MOV
in the Mode dropdown list.
Here you can set the image quality using the slider or spin box/input
field. This influences the size of the file. Although low values result in
very small files, image quality may be considerably reduced.
The following functions are only visible if the Show All mode is activated:
Parameter Description
Apply Display Settings and Channel Color
– Burn-in Annotations Activated: Burns the graphic elements into the image. The pixels under the graphic element (e.g. scale bars) are overwritten. The burnt-in graphic elements cannot be subsequently modified.
Use Full Set of Dimensions Activated: Includes all dimensions of the original file.
Define Subset Activated: Creates a subset of the data, depending on the dimensions available in the image, e.g. Channels, Region, Tiles.
Info
At least one of the three checkboxes must be activated. If the Merged Channels Image and
Individual Channel Image checkboxes are activated, you can export the individual colored
images and the pseudo color images in a single step.
The following functions are only visible if the Show All mode is activated:
Parameter Description
Fitting Select the desired type of fitting here.
- Fit All (Uniform) Fits the image to the selected resolution. The original aspect ratio is retained.
- Fit and Crop (Uniform to Fill) Fits the image to the selected resolution and clips it. The original aspect ratio is not retained.
- Fit and Stretch (Fill) Stretches the image to the selected resolution. The original aspect ratio is not retained.
- Crop (None) Crops the image to the selected resolution. The original aspect ratio is retained.
The following functions are only visible if the Show All mode is activated:
Change the sequence of the dimensions in which you want the movie to be created.
Up button: Shifts the selected dimension up a line.
Down button: Shifts the selected dimension down a line.
Parameter Description
Mapping Select how you want the images to be assigned.
- Fixed Duration Enter the time per image in seconds using the spin box/input field.
The total length is displayed in the Final Movie Length display field.
Final Movie Length Indicates the total length of the resulting movie, depending on the selected image sequence and the time.
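The displayed total length follows directly from this mapping (a trivial illustration, not ZEN code):

```python
def movie_length_seconds(num_images, seconds_per_image):
    """Fixed Duration mapping sketch: total movie length is the number
    of exported images times the time per image."""
    return num_images * seconds_per_image

movie_length_seconds(120, 0.2)  # -> 24.0 seconds
```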
Parameter Description
Use Full Set of Dimensions Select this option if you want to export all dimensions without changing them.
Define Subset Select this option if you only want to export individual dimensions or subsets of individual dimensions.
Info
Each of the sections described below is only visible if the corresponding dimension is present in
the input image.
Parameter Description
Channels Here you can select which channels of the input image you want to
be used. All channels are selected by default. To deselect a channel,
click on the relevant channel button.
Z-Position, Time, Block, Scene Here you can select which parts of the input image you want to use for the resulting image.
- Extract All If selected, all parts of the corresponding image are extracted.
- Extract Range If selected, you can select a certain range of images to be extracted.
- Extract Multiple If selected, you can select several continuous ranges and individual sections.
Enter one or more sections that you want to select in the input field. To do this, enter the first section, followed by a minus sign, and then the last section. If you want to define an interval, after the last section enter a colon and then the interval. The entry "2-10:2" means that every second section is selected from section 2 to section 10.
Enter a comma after the first section if you want to define another section. You can also select individual sections separated by commas. By entering "2-10:2,14-18,20,23", you select every second section from section 2 to section 10, followed by sections 14 to 18, as well as sections 20 and 23.
- Get current position Adopts the position from the current display in the image area.
- Interval Activated: Interval mode is active. The Interval spin box/input field
appears.
Enter the desired interval here. E.g. if you enter the value 2 only every
2nd value from the range is considered.
Region Here you can select if you want to use the entire image or just a region (ROI) of the input image. Note that in combination with image dimensions, the rectangle is only drawn into one scene and only this is considered for export. For example, in an image with multiple scenes, the image export only considers the ROI drawn into the currently displayed scene, as defined on the Dimensions view options tab.
- Full If this option is selected, the full image is used for the new image.
- Rectangle Region If this option is selected, you can draw in a rectangle region of interest (ROI) which is used for creating a new image.
If a rectangle region is drawn in, you can see and change its coordinates by editing the input fields for X, Y, W, and/or H.
- Export Selected Tiles Exports the tiles according to the selected original image.
- Crop to Selection and Generate New Tiles Creates new tiles according to your entries in the fields Columns/Rows and Overlap.
- Columns/Rows Only available if Crop to Selection and Generate New Tiles is activated.
Defines how many rows and columns are used for the re-tiling.
- Overlap Only available if Crop to Selection and Generate New Tiles is activated.
Defines the overlap of the re-tiled tiles in %.
Parameter Description
Export to The path of the export folder is displayed automatically in the display field.
To change the file path, click on the button to the right of the display field.
Prefix Here you can edit the prefix specified or enter a new name. The name
of the original image is specified by default.
Using the OME TIFF Export function you can export your images into OME (Open Microscopy
Environment) TIFF format so that you can continue to use them in other programs. The images
are then available as a multipage TIFF file.
Parameter Description
Resize Adjust the image size in percent using the slider or spin box/input field.
BigTIFF Activated: Creates a BigTIFF image that can be bigger than 4 gigabytes and uses the 64-bit offset format.
Shift Pixel Activated: Shifts the grey value of a 10-bit or 12-bit image to 16-bit.
Merge All Scenes Activated: Generates one image including all scenes. Single scene images will be generated if the checkbox is deactivated.
The following functions are only visible if the Show All mode is activated:
Parameter Description
Original Data Activated: Exports the image with the original channel colors and the
original display characteristic curve.
– Convert to 8 Bit Activated: The pixels are shifted to 16 bit before converting to 8 bit. For example, a 14 bit image is first transformed to a 16 bit image which is then converted to an 8 bit image. The 14 bit range is mapped to the whole 8 bit range.
Deactivated: No shift takes place. A 14 bit image is treated as a 16 bit image and therefore the transformation to 8 bit covers only a reduced range.
Apply Display Curve and Channel Color Activated: Exports the image with the changed channel color and display characteristic curve settings. These settings are applied to the pixel values of the exported images. They are particularly important if you want to use dark images with a dynamic range of more than 8 bits in other programs.
– Burn-in Annotations Activated: Burns the graphical elements into the image. The pixels under the graphical element (e.g. scale bars) are overwritten. The burnt-in graphical elements cannot be modified.
Parameter Description
Use Full Set of Dimensions Select this option if you want to export all dimensions without changing them.
Define Subset Select this option if you only want to export individual dimensions or subsets of individual dimensions.
Info
Each of the sections described below is only visible if the corresponding dimension is present in
the input image.
Parameter Description
Channels Here you can select which channels of the input image you want to
be used. All channels are selected by default. To deselect a channel,
click on the relevant channel button.
Z-Position, Time, Block, Scene Here you can select which parts of the input image you want to use for the resulting image.
- Extract All If selected, all parts of the corresponding image are extracted.
- Extract Range If selected, you can select a certain range of images to be extracted.
- Extract Multiple If selected, you can select several continuous ranges and individual sections.
Enter one or more sections that you want to select in the input field.
To do this, enter the first section, followed by a minus sign, and then
the last section. If you want to define an interval, after the last section
enter a colon and then the interval. The entry "2-10:2" means that ev-
ery second section is selected from section 2 to section 10.
Enter a comma after the first section if you want to define another
section. You can also select individual sections separated by commas.
By entering "2-10:2,14-18,20,23", you select every second section
from section 2 to section 10, followed by sections 14 to 18, as well as
sections 20 and 23.
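The subset syntax described above can be expressed as a small parser. The following is an illustrative sketch (the function name and its behavior are not part of ZEN, only the "first-last:interval" string format is taken from the text):

```python
def parse_subset(spec: str) -> list[int]:
    """Parse a subset string such as "2-10:2,14-18,20,23".

    Each comma-separated token is either a single section number,
    a range "first-last", or a range with an interval "first-last:interval".
    """
    sections = []
    for token in spec.split(","):
        token = token.strip()
        if "-" in token:
            rng, _, step = token.partition(":")
            first, last = (int(v) for v in rng.split("-"))
            sections.extend(range(first, last + 1, int(step) if step else 1))
        else:
            sections.append(int(token))
    return sections

# "2-10:2" selects every second section from section 2 to section 10
print(parse_subset("2-10:2"))                 # [2, 4, 6, 8, 10]
print(parse_subset("2-10:2,14-18,20,23"))
```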
- Get current position
Adopts the position from the current display in the image area.
- Interval
Activated: Interval mode is active. The Interval spin box/input field appears.
Enter the desired interval here. E.g. if you enter the value 2, only every 2nd value from the range is considered.
Region
Here you can select whether you want to use the entire image or just a region of interest (ROI) of the input image. Note that in combination with image dimensions, the rectangle is only drawn into one plane, and only this plane is considered for export. For example, in an image with multiple scenes, the image export only considers the ROI drawn into the currently displayed scene, as defined on the Dimensions view options tab.
Parameter Description
- Full
If this option is selected, the full image is used for the new image.
- Rectangle Region (ROI)
If this option is selected, you can draw in a rectangular region of interest which is used for creating the new image.
If a rectangle region is drawn in, you can see and change its coordinates by editing the input fields for X, Y, W, and/or H.
- Export Selected Tiles
Exports the tiles according to the selection in the original image.
- Crop to Selection and Generate New Tiles
Creates new tiles according to your entries in the fields Columns/Rows and Overlap.
- Columns/Rows
Only available if Crop to Selection and Generate New Tiles is activated.
Defines with how many rows and columns the re-tiling is performed.
- Overlap
Only available if Crop to Selection and Generate New Tiles is activated.
Defines the overlap of the re-tiled tiles in %.
Parameter Description
Export to
The path of the export folder is displayed automatically in the display field.
To change the file path, click on the button to the right of the display field.
Parameter Description
Prefix
Here you can edit the specified prefix or enter a new name. The name of the original image is specified by default.
Using the ZVI Export function, you can export your images into ZVI format so that you can con-
tinue to use them in AxioVision.
Parameter Description
Export to
The path of the export folder is displayed automatically in the display field.
To change the file path, click on the button to the right of the display field.
Parameter Description
Prefix
Here you can edit the specified prefix or enter a new name. The name of the original image is specified by default.
3.11.8.7 Geometric
With this function you can easily change the image orientation.
Parameters
Parameter Description
Orientation
Info
No Alignment of Mixed Mode Images
The alignment does not work for mixed mode images, i.e. you cannot align a widefield chan-
nel to an LSM channel.
This method allows you to automatically align the individual channels of a multi-channel image
correctly to one another.
Parameter Description
Registration Method
Only visible if Load Transformation is deactivated.
Selects the method or a combination of methods which is used to align the images.
- Translation
The neighboring sections of the image are shifted in relation to each other in the X and Y direction.
- Rotation
The neighboring sections of the image are rotated in relation to each other.
- Skew Scaling
The neighboring sections of the image are corrected for skewness/shearing.
- Affine
The neighboring sections of the image are shifted in the X and Y direction, rotated, and the magnification is adjusted from section to section.
– Low
This is the most imprecise but also the fastest calculation of the alignment. It uses a low number of levels (2) of the image pyramid for the alignment calculation.
– Medium
This is a more precise but also a slower calculation of the alignment than the one before. It uses a medium number of levels (3) of the image pyramid for the alignment calculation.
– High
This is a more precise but also a slower calculation of the alignment than the two before. It uses a high number of levels (4) of the image pyramid for the alignment calculation.
– Highest
This is the most precise but also the slowest calculation of the alignment. It uses the highest number of levels of the image pyramid for the alignment calculation.
- Nearest Neighbor
The output pixel is given the gray value of the input pixel that is closest to it.
- Linear
The output pixel is given the gray value resulting from the linear combination of the input pixels closest to it.
- Cubic
The output pixel is given the gray value resulting from a polynomial function of the input pixels closest to it.
Crop Output
Only visible if your image does not contain multiple tiles or scenes.
Activated: The image is cropped and only keeps the section which is covered by the entire dimension (e.g. by all z-slices or time points). The output image can be of a different size than the input image.
Deactivated: The output image is not cropped and keeps the size of the input image. The image borders might get filled with a default pixel value.
Reference Channel
Selects the channel that serves as reference for the alignment.
Target Channel
Selects the channel that is targeted for alignment.
Single Component
Only visible if you have selected an input image which contains dimensional components (e.g. scenes, z-planes, time points) and if this component is not selected as Third Dimension.
Activated: Displays a control to select the dimensional component that is used to calculate the alignment transformation matrix.
Deactivated: Calculates an alignment transformation matrix for each individual component (e.g. for each individual scene).
Single Tile
Only visible for images with multiple tiles and if Single Component is activated.
Activated: Displays the Tile Component control.
– Tile Component
Selects the tile component that is used to calculate the alignment transformation matrix.
See also
2 General Settings [} 83]
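The Translation registration described above estimates an X/Y shift between a reference channel and a target channel. One common way to estimate such a shift, shown here purely for illustration and not necessarily ZEN's internal algorithm, is phase correlation:

```python
import numpy as np

def estimate_translation(reference: np.ndarray, target: np.ndarray):
    """Estimate the integer (dy, dx) shift that maps `target` onto `reference`
    using phase correlation. Illustrative sketch; ZEN's method may differ."""
    f_ref = np.fft.fft2(reference)
    f_tgt = np.fft.fft2(target)
    cross_power = f_ref * np.conj(f_tgt)
    cross_power /= np.abs(cross_power) + 1e-12   # keep only the phase difference
    correlation = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Shifts beyond half the image size wrap around to negative values
    if dy > reference.shape[0] // 2:
        dy -= reference.shape[0]
    if dx > reference.shape[1] // 2:
        dx -= reference.shape[1]
    return int(dy), int(dx)

# Synthetic example: displace a test image and recover the shift back to it
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
tgt = np.roll(ref, shift=(-3, 5), axis=(0, 1))
print(estimate_translation(ref, tgt))   # (3, -5)
```

Rotation, scaling, and affine registration extend this idea with more transformation parameters, which is why they are slower to compute.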
Info
No Alignment of Mixed Mode Images
The alignment does not work for mixed mode images, i.e. you cannot align a widefield chan-
nel to an LSM channel.
This method allows you to automatically align the individual channels of a multi-channel image
correctly to one another.
Parameter Description
Load Transformation
Activated: Allows you to load the result of a previous transformation.
– Transformation file
Allows you to select the respective *.xml file with the previously used transformation.
Save Transformation
Only visible if Load Transformation is deactivated.
Activated: Saves the result of the transformation process in an *.xml file for later use.
– Sobel
Uses the Sobel method for preprocessing, see Sobel [} 473].
– Laplace
Uses the Laplace method for preprocessing, see Laplace [} 472].
– Roberts
Uses the Roberts method for preprocessing, see Roberts [} 473].
– Enhance Contour
Uses the Enhance Contour method for preprocessing, see Enhance Contour [} 174].
– Maximum Gradient
Uses the Maximum Gradient method for preprocessing, see Gradient Max [} 472].
- Translation
The neighboring sections of the image are shifted in relation to each other in the X and Y direction.
- Rotation
The neighboring sections of the image are rotated in relation to each other.
- Skew Scaling
The neighboring sections of the image are corrected for skewness/shearing.
- Affine
The neighboring sections of the image are shifted in the X and Y direction, rotated, and the magnification is adjusted from section to section.
– Low
This is the most imprecise but also the fastest calculation of the alignment. It uses a low number of levels (2) of the image pyramid for the alignment calculation.
– Medium
This is a more precise but also a slower calculation of the alignment than the one before. It uses a medium number of levels (3) of the image pyramid for the alignment calculation.
– High
This is a more precise but also a slower calculation of the alignment than the two before. It uses a high number of levels (4) of the image pyramid for the alignment calculation.
– Highest
This is the most precise but also the slowest calculation of the alignment. It uses the highest number of levels of the image pyramid for the alignment calculation.
- Nearest Neighbor
The output pixel is given the gray value of the input pixel that is closest to it.
- Linear
The output pixel is given the gray value resulting from the linear combination of the input pixels closest to it.
- Cubic
The output pixel is given the gray value resulting from a polynomial function of the input pixels closest to it.
Single Component
Only visible if you have selected an input image which contains dimensional components (e.g. scenes, z-planes, time points) and if this component is not selected as Third Dimension.
Activated: Displays a control to select the dimensional component that is used to calculate the alignment transformation matrix.
Deactivated: Calculates an alignment transformation matrix for each individual component (e.g. for each individual scene).
Single Tile
Only visible for images with multiple tiles and if Single Component is activated.
Activated: Displays the Tile Component control.
– Tile Component
Selects the tile component that is used to calculate the alignment transformation matrix.
See also
2 General Settings [} 83]
The Color-coded projection function generates a maximum intensity projection image along the z- or time dimension of a multidimensional data set. Instead of using the colors assigned to the channels, it displays the position in z or in time with a color gradient.
Parameters
Parameter Description
Palette
Sets the color pattern for the projection.
ROI
Specifies the range of the dimension (chosen below) which will be used for the projection.
Dimension
Selects whether to perform the projection along the time (T) or the z-axis (Z). Only dimensions which are available in your data set are shown.
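Conceptually, the function records, for each pixel, the maximum intensity along the chosen dimension together with the position where that maximum occurred, and then maps the position through the palette. A minimal numpy sketch of this idea (illustrative only, not ZEN's implementation; the palette layout is assumed):

```python
import numpy as np

def color_coded_projection(stack: np.ndarray, palette: np.ndarray) -> np.ndarray:
    """stack: (Z, Y, X) intensities; palette: (Z, 3) RGB color per z-position.
    Returns a (Y, X, 3) image where each pixel carries the color of the
    z-position of its maximum intensity, modulated by that intensity."""
    z_of_max = np.argmax(stack, axis=0)        # (Y, X) index of the brightest plane
    max_val = np.max(stack, axis=0)            # (Y, X) maximum intensity projection
    colors = palette[z_of_max]                 # (Y, X, 3) color encodes depth
    return colors * max_val[..., None]         # scale color by intensity

# Tiny example: 3 planes, each bright in a different pixel
stack = np.zeros((3, 1, 3))
stack[0, 0, 0] = stack[1, 0, 1] = stack[2, 0, 2] = 1.0
palette = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)  # red/green/blue
print(color_coded_projection(stack, palette)[0])
```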
With this function you can align (or overlay) two images that are displaced in relation to each other. It is also possible to make the individual planes of a z-stack image congruent, in the event that they are not lying exactly on top of one another.
You define 3 corresponding points (Input Pixel) in the input image and in the reference image (Reference Pixel) that is displaced in relation to it. To do this, click interactively on conspicuous points which are present in both images. If you click Apply, the function calculates the output image, in which the new fitting points have the same coordinates as in the input image.
Parameters
Parameter Description
Input Pixel 1 - 3
If you click on the corresponding buttons, you can define the 3 input pixel points.
The selected point is shown in the graphics plane. This serves as an aid to orientation when you are clicking on the reference points.
Reference Pixel 1 - 3
If you click on the corresponding buttons, you can define the 3 reference pixel points.
Interpolation
Here you can specify how the rotation influences the neighboring pixels.
- Linear
The rotated pixel is given the gray value calculated from the linear combination of the gray values of the pixel closest to it and this pixel's nearest neighbor.
- Cubic
The rotated pixel is given the gray value resulting from a polynomial function of the pixel that is closest to it.
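Three point correspondences are exactly enough to determine a unique affine transform (translation, rotation, scaling, shear). The fitting step can be sketched as solving a small linear system; this is an illustrative derivation, not ZEN's code:

```python
import numpy as np

def affine_from_points(input_pts, reference_pts):
    """Compute the 2x3 affine matrix A such that A @ [x, y, 1] maps each
    input point onto the corresponding reference point. Exactly three
    non-collinear point pairs are required."""
    src = np.hstack([np.asarray(input_pts, float), np.ones((3, 1))])  # (3, 3)
    dst = np.asarray(reference_pts, float)                            # (3, 2)
    # Solve src @ A.T = dst for the 2x3 matrix A
    return np.linalg.solve(src, dst).T

# Example: the reference points are the input points shifted by (+10, +5)
inp = [(0, 0), (1, 0), (0, 1)]
ref = [(10, 5), (11, 5), (10, 6)]
A = affine_from_points(inp, ref)
print(A)   # identity rotation/scale with translation column (10, 5)
```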
3.11.8.7.6 Mirror
This method allows you to flip an image horizontally or vertically. In the case of multidimensional
images, such as Z-stack or time lapse images, you can also use the mirror method to reverse the
sequence of the relevant dimension.
Parameters
Parameter Description
Display Mode
See also
2 Image Processing Workflow [} 72]
With this method you can extract specific parts of three-dimensional images. This is accomplished with three alternative projection planes: frontal in the XY direction, sagittal in the YZ direction, or transverse in the XZ direction, as seen from the observer of the image. You can choose between different projection methods. All methods have in common that the pixels are analyzed along an imaginary projection beam from the observer. You can also determine the thickness of the projection planes, and thus the projection depth.
Parameters
Parameter Description
Projection Plane
Here you choose the type of the projection plane (Frontal (X/Y), Transverse (X/Z), Sagittal (Y/Z)).
Method
- Average
Calculates the average of all pixels along the projection beam.
- Weighted average
This method is related to the calculation of the extended depth of focus. It prefers structures with more lateral sharpness along the projection beam. The output image contains more significant details.
- Standard deviation
Calculates the standard deviation of pixel gray values along the projection beam.
Start position
Here you adjust the starting position of the projection plane (in pixel units or z-stack positions, depending on the chosen projection plane). The maximum range results automatically from the size of the input image.
Thickness
Here you adjust the thickness of the cutting plane (in pixels or z-stack positions, depending on the chosen projection plane). The maximum range results automatically from the size of the input image.
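For a stack of shape (Z, Y, X), each projection plane corresponds to reducing along one axis over the start/thickness range. A minimal numpy sketch of the Average method (the function and parameter names are illustrative):

```python
import numpy as np

def average_projection(stack, plane="frontal", start=0, thickness=None):
    """Average-project a (Z, Y, X) stack over a slab of the given thickness.
    Frontal (X/Y) averages along Z, transverse (X/Z) along Y,
    sagittal (Y/Z) along X."""
    axis = {"frontal": 0, "transverse": 1, "sagittal": 2}[plane]
    if thickness is None:
        thickness = stack.shape[axis]           # default: the full dimension
    slab = [slice(None)] * 3
    slab[axis] = slice(start, start + thickness)  # start position and depth
    return stack[tuple(slab)].mean(axis=axis)     # average along the beam

stack = np.arange(24, dtype=float).reshape(2, 3, 4)       # Z=2, Y=3, X=4
print(average_projection(stack, "frontal").shape)          # (3, 4)
print(average_projection(stack, "sagittal", 0, 2).shape)   # (2, 3)
```

The Standard deviation method would replace `.mean(axis=axis)` with `.std(axis=axis)`; the weighted average additionally weights each plane by a local sharpness measure.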
3.11.8.7.8 Resample
This method allows you to change the size of an image in every dimension. You can either enlarge
or reduce the image size.
Parameters
Parameter Description
Third Dimension
Only visible if there is a third dimension in the input image and/or Show all mode is activated.
Here you can select how you want the function to work in the case of multidimensional images.
Adapt sizes
Activated: The size of the output image is adjusted in accordance with the settings for the scaling.
Deactivated: The output image has the same size as the input image. Depending on the image size and rotation angle, partial areas of the input image may not be visible in the output image.
Adjust per Channel
Only visible if your input image is a multi-channel image.
Activated: You can adjust the parameters for each channel individually.
Interpolation
Here you can select how you want interpolation to be performed if a pixel is calculated from several individual pixels.
- Nearest Neighbor
The output pixel is given the gray value of the input pixel that is closest to it.
- Linear
The output pixel is given the gray value resulting from the linear combination of the input pixels closest to it.
- Cubic
The output pixel is given the gray value resulting from a polynomial function of the input pixels closest to it.
Controls
Parameter Description
Scaling in X Adjust the desired scaling for X using the slider or input field.
Scaling in Y Adjust the desired scaling for Y using the slider or input field.
Scaling in Z Adjust the desired scaling for Z using the slider or input field.
The following parameters are only visible if the Adapt sizes checkbox is deactivated:
Shift in X Enter the shift in the X direction using the slider or input field.
Shift in Y Enter the shift in the Y direction using the slider or input field.
Shift in Z Enter the shift in the Z direction using the slider or input field.
3.11.8.7.9 Rotate
With this method you can rotate images by defined angles. This function was especially developed for rotating complex (multi-dimensional) images in the available image dimensions. As a result, the function can be somewhat slower, but it offers more settings for the rotation. For simple, two-dimensional rotations we recommend using the Rotate 2D function, which is usually a lot faster.
Parameters
Parameter Description
Third Dimension
Only visible if there is a third dimension in the input image and/or Show all mode is activated.
Here you can select how you want the function to work in the case of multidimensional images.
Adapt sizes
Activated: The size of the output image is adjusted in accordance with the settings for the scaling.
Deactivated: The output image has the same size as the input image. Depending on the image size and rotation angle, partial areas of the input image may not be visible in the output image.
Adjust per Channel
Only visible if your input image is a multi-channel image.
Activated: You can adjust the parameters for each channel individually.
Interpolation
Here you can select how you want interpolation to be performed if a pixel is calculated from several individual pixels.
- Nearest Neighbor
The output pixel is given the gray value of the input pixel that is closest to it.
- Linear
The output pixel is given the gray value resulting from the linear combination of the input pixels closest to it.
- Cubic
The output pixel is given the gray value resulting from a polynomial function of the input pixels closest to it.
Parameter Description
Angle
Enter the angle by which you want the input image to be rotated using the slider or input field. Positive angles rotate the image clockwise.
Angle X
Enter the angle by which you want the input image to be rotated on the X axis using the slider or input field.
Angle Y
Enter the angle by which you want the input image to be rotated on the Y axis using the slider or input field.
Angle Z
Enter the angle by which you want the input image to be rotated on the Z axis using the slider or input field.
The following parameters are only visible if the Adapt sizes checkbox is deactivated:
Center X
Enter the X coordinate of the center of the rotation using the slider or spin box/input field.
The value 0 means that the image is rotated around its center point. Negative values mean that the center of the rotation in the image is shifted to the left in relation to the image's center point. Positive values shift the center to the right.
Center Y
Enter the Y coordinate of the center of the rotation using the slider or spin box/input field.
The value 0 means that the image is rotated around its center point. Negative values mean that the center of the rotation in the image is shifted downwards in relation to the image's center point. Positive values shift the center upwards.
Center Z
Enter the Z coordinate of the center of the rotation using the slider or spin box/input field.
The value 0 means that the image is rotated around its center point. Negative values mean that the center of the rotation in the image is shifted forwards in relation to the image's center point. Positive values shift the center backwards.
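Rotating around a center that is offset from the image midpoint is equivalent to translating the center to the origin, rotating, and translating back. The coordinate math can be sketched as follows (illustrative only; ZEN resamples the whole image, while this transforms a single coordinate):

```python
import numpy as np

def rotate_point(point, angle_deg, center=(0.0, 0.0)):
    """Rotate an (x, y) point by angle_deg around `center`. With the y-axis
    pointing downward, as in image coordinates, positive angles rotate
    clockwise, matching the Angle parameter above."""
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    p = np.asarray(point, float) - center   # shift the rotation center to the origin
    return rot @ p + center                 # rotate, then shift back

# 90 degrees clockwise around the origin moves (1, 0) to (0, 1) (y-down convention)
print(rotate_point((1.0, 0.0), 90.0))
# The same rotation around center (1, 1) leaves the center point fixed
print(rotate_point((1.0, 1.0), 90.0, center=(1.0, 1.0)))
```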
3.11.8.7.10 Rotate 2D
With this method, you can easily rotate an image clockwise around its center axis. Simply set the
desired angle with the slider. Of course, you can enter the angle value in the input field directly.
To perform the rotation, click on the Apply button on top of the Processing tab.
Parameter Description
Angle
Enter the angle by which you want the input image to be rotated using the slider or input field. Positive angles rotate the image clockwise.
Interpolation
- Nearest Neighbor
The output pixel is given the gray value of the input pixel that is closest to it.
- Linear
The output pixel is given the gray value resulting from the linear combination of the input pixels closest to it.
- Cubic
The output pixel is given the gray value resulting from a polynomial function of the input pixels closest to it.
PaveWholePlane
Activated: Paves the whole resulting plane with tiles; there might be empty tiles.
Deactivated: Creates only tiles which also contain parts of the image.
IncludeGraphicElements
Activated: Rotates the graphical elements together with the image.
Deactivated: Rotates only the image, but not the graphical elements.
With this function, you can reduce the size of an image in a flexible way. The reduction is per-
formed along with an averaging of the respective dimension. If the parameters are set to 1, the
corresponding dimension is not modified.
Parameter Description
Average Pixels X
Adjusts how many pixels are averaged in the lateral X dimension to calculate the output image. The size of the output will be smaller by this factor.
Average Pixels Y
Adjusts how many pixels are averaged in the lateral Y dimension to calculate the output image. The size of the output will be smaller by this factor.
Average Pix. 3rd Dimension
Only visible if a third dimension other than 2D Slices is selected below.
Adjusts how many pixels are averaged in the third dimension selected below. The size of the output will be smaller by this factor.
Third Dimension
If your input image has a third dimension, you can select it here for re-sampling.
§ Z (sections of a Z-Stack)
§ C (Channels)
If 2D Slices is selected, the third dimension will not be re-sampled.
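The averaging reduction described above is essentially block binning: each output pixel is the mean of a block of input pixels, and the output shrinks by the averaging factors. A numpy sketch under the assumption of a 2D image (the function name is illustrative):

```python
import numpy as np

def downsample_by_averaging(image: np.ndarray, fx: int, fy: int) -> np.ndarray:
    """Reduce a 2D image by averaging fy x fx pixel blocks. The output is
    smaller by exactly these factors; a factor of 1 leaves the
    corresponding dimension unchanged."""
    h, w = image.shape
    # Trim to a multiple of the factors, reshape into blocks, average each block
    blocks = image[: h - h % fy, : w - w % fx].reshape(h // fy, fy, w // fx, fx)
    return blocks.mean(axis=(1, 3))

image = np.arange(16, dtype=float).reshape(4, 4)
print(downsample_by_averaging(image, fx=2, fy=2))
# Each 2x2 block, e.g. [[0, 1], [4, 5]], collapses to its mean
```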
3.11.8.7.12 Shift
This method allows you to shift the content of an image in the direction of the 3 axes X, Y and Z.
To adjust the shift, use the respective sliders or input fields under Parameters.
Parameter Description
Third Dimension
Only available if Show All is activated and if the acquired image is three-dimensional.
Selects the third dimension for the shift.
– 2D Slices
The third dimension is shifted by 0, which means the image is only shifted in the X/Y direction.
Adjust per Channel
Only available for images with multiple channels.
Opens a list with the channels to allow an individual adjustment of each channel.
– Skip Channel
Skips this channel when processing. This channel will not be in the output image.
– Copy Channel
This channel is copied into the output image without a shift.
3.11.8.7.13 Shift Z
Parameter Description
Shift in Z
Selects the shift of the content in the z direction.
See also
2 General Settings [} 83]
The images are registered by using the stage calibration points from the metadata.
Parameter Description
Interpolation Here you can select how you want interpolation to be performed.
- Nearest Neighbor
The output pixel is given the value of the input pixel that is closest to it.
- Linear
The output pixel is given the value resulting from the linear combination of the input pixels closest to it.
- Cubic
The output pixel is given the value resulting from a polynomial function of the input pixels closest to it.
3.11.8.7.15 Stitching
This method allows you to automatically align the individual tiles of a tile image with one another.
When acquiring a tile image, the stage movement is not precise down to the pixel level of the
camera sensor. To bypass this technical limitation and to have a margin to compensate for this in-
accuracy, tiles are usually overlapped by a few percent. To align them, the overlaps between
neighboring tiles are analyzed. The tiles are then shifted and rotated against each other to make
the transitions between them as seamless as possible.
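The overlap analysis can be illustrated by scoring candidate shifts with a similarity measure over the overlap strip. The following is a deliberately simplified one-dimensional sketch (horizontal shift only, mean squared difference as the score); ZEN's actual comparer is more elaborate:

```python
import numpy as np

def estimate_tile_shift(left_tile, right_tile, overlap, max_shift):
    """Estimate the horizontal displacement (in pixels) of right_tile that
    best matches the right edge of left_tile over the nominal overlap
    strip, by minimizing the mean squared difference."""
    edge = left_tile[:, -overlap:]                # right edge of the left tile
    scores = {}
    for s in range(0, max_shift + 1):
        candidate = right_tile[:, s:s + overlap]  # overlap strip shifted by s
        scores[s] = np.mean((edge - candidate) ** 2)
    return min(scores, key=scores.get)            # shift with the best match

# Build two tiles from one scene that overlap with a 2-column stage error
rng = np.random.default_rng(1)
scene = rng.random((8, 20))
left_tile = scene[:, :12]
right_tile = scene[:, 6:]    # a nominal 4-column overlap would start at column 8
print(estimate_tile_shift(left_tile, right_tile, overlap=4, max_shift=3))  # 2
```

Real stitching repeats such comparisons for all tile pairs (vertically as well) and solves for a globally consistent set of tile positions.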
Info
Direct Processing
When you use Stitching in Direct Processing, not all the parameters are available (e.g. Fuse
Tiles or Correct Shading). Some functionality is not available because there is no image avail-
able yet during the setup of Direct Processing. This means that taking reference settings from
the 2D view or processing dimensions Reference only is not possible. For experiments con-
taining multiple scenes, each scene is thus processed separately. Direct Processing also always
keeps both the acquired and processed image, so an inplace stitching is not available.
Parameter Description
Inplace
The stitching is directly applied to the original image.
New Output
A new image is generated as a result of the stitching process. The original image is not modified.
Parameter Description
- Automatic
The function automatically calculates a reference image from the input image.
- Reference
The function uses an existing reference image. This must be selected in addition to the input image in the Input tool of the image parameters section.
Select Reference Image
Only visible in Batch mode and if Reference is selected as shading correction.
Selects a reference image for the shading correction.
- Get all dimensions from 2D view
Selects the reference dimensions based on the current planes from the 2D view by clicking the corresponding button.
- All by reference
Uses only the reference image plane for calculating the stitching of this dimension. All other planes are stitched accordingly and appear in the output image.
- Reference only
Uses only the selected reference image plane of this dimension. No other planes appear in the output image, e.g. the result of stitching a z-stack with Reference only would be a normal 2D image.
- All individually
All planes of this dimension are stitched individually and appear in the output image.
- Channels
Selects the channel that serves as reference. Clicking the button takes the current channels from the 2D view.
- Z-Position
Selects the z-position that serves as reference. Clicking the button takes the current z-position from the 2D view.
- Time
Selects the time point that serves as reference. Clicking the button takes the current time point from the 2D view.
- Scene
Selects the scene that serves as reference. Clicking the button takes the current scene from the 2D view.
Edge Detector
Parameter Description
Minimal Overlap
Sets the minimal amount of overlap between neighboring tiles that should be evaluated by the stitching function. The overlap is set as % of the single tile dimensions (height and width).
Maximal Shift
Specifies the maximal extent of shift that can be applied to an individual tile during stitching. The shift is set as % of the single tile dimensions (height and width).
Comparer
Selects a method for comparing the tile overlaps to find similarities for alignment.
- Basic
Only the overlaps (except diagonal overlaps) with the strongest contrasts in each tile are taken into account for stitching. The contrast is determined with a Sobel filter.
- Best
All overlaps (except diagonal overlaps) of a tile are taken into account for stitching.
See also
2 Using Direct Processing [} 230]
2 Shading Correction [} 116]
This method allows you to bring the individual planes of a z-stack image into line if these are not
positioned precisely one above the other. This is the case, for example, when you acquire z-stacks
at an oblique angle using a stereo microscope.
Parameter Description
Registration Method
Only visible if Load Transformation is deactivated.
Selects the method or a combination of methods which is used to align the images.
- Translation
The neighboring sections of the image are shifted in relation to each other in the X and Y direction.
- Rotation
The neighboring sections of the image are rotated in relation to each other.
- Iso Scaling
The magnification is adjusted from section to section.
- Skew Scaling
The neighboring sections of the image are corrected for skewness/shearing.
- Affine
The neighboring sections of the image are shifted in the X and Y direction, rotated, and the magnification is adjusted from section to section.
– Low
This is the most imprecise but also the fastest calculation of the alignment. It uses a low number of levels (2) of the image pyramid for the alignment calculation.
– Medium
This is a more precise but also a slower calculation of the alignment than the one before. It uses a medium number of levels (3) of the image pyramid for the alignment calculation.
– High
This is a more precise but also a slower calculation of the alignment than the two before. It uses a high number of levels (4) of the image pyramid for the alignment calculation.
– Highest
This is the most precise but also the slowest calculation of the alignment. It uses the highest number of levels of the image pyramid for the alignment calculation.
- Nearest Neighbor
The output pixel is given the gray value of the input pixel that is closest to it.
- Linear
The output pixel is given the gray value resulting from the linear combination of the input pixels closest to it.
- Cubic
The output pixel is given the gray value resulting from a polynomial function of the input pixels closest to it.
Crop Output
Only visible if your image does not contain multiple tiles or scenes.
Activated: The image is cropped and only keeps the section which is covered by the entire dimension (e.g. by all z-slices or time points). The output image can be of a different size than the input image.
Deactivated: The output image is not cropped and keeps the size of the input image. The image borders might get filled with a default pixel value.
Single Component
Only visible if you have selected an input image which contains dimensional components (e.g. scenes, z-planes, time points) and if this component is not selected as Third Dimension.
Activated: Displays a control to select the dimensional component that is used to calculate the alignment transformation matrix.
Deactivated: Calculates an alignment transformation matrix for each individual component (e.g. for each individual scene).
Single Tile
Only visible for images with multiple tiles and if Single Component is activated.
Activated: Displays the Tile Component control.
– Tile Component
Selects the tile component that is used to calculate the alignment transformation matrix.
See also
2 General Settings [} 83]
This function allows you to automatically align the individual planes of a z-stack image if they are
not positioned precisely above each other.
Parameter Description
Quality
Selects the quality level that you want the function to work with. The calculation of the alignment is based on a so-called image pyramid. The higher the selected quality, the more levels of the image pyramid are used to calculate the alignment and the more precise the alignment will be. However, the higher the selected quality, the slower the calculation of the alignment will get.
– Low
This is the most imprecise but also the fastest calculation of the alignment. It uses a low number of levels (2) of the image pyramid for the alignment calculation.
– Medium
This is a more precise but also a slower calculation of the alignment than the one before. It uses a medium number of levels (3) of the image pyramid for the alignment calculation.
– High
This is a more precise but also a slower calculation of the alignment than the two before. It uses a high number of levels (4) of the image pyramid for the alignment calculation.
– Highest
This is the most precise but also the slowest calculation of the alignment. It uses the highest number of levels of the image pyramid for the alignment calculation.
– Translation
The neighboring sections of the z-stack are shifted in relation to each other in the X and Y direction.
– Rotation
The neighboring sections of the z-stack are rotated in relation to each other.
– Translation + Rotation
The neighboring sections of the z-stack are translated and rotated in relation to each other.
- Nearest Neighbor
The output pixel is given the gray value of the input pixel that is closest to it.
- Linear
The output pixel is given the gray value resulting from the linear combination of the input pixels closest to it.
- Cubic
The output pixel is given the gray value resulting from a polynomial function of the input pixels closest to it.
Region Selects which parts of the image should be considered for the calcula-
tion of the transformation matrix.
- Rectangle Re- Allows you to draw a region of interest into the image. Only the im-
gion age information of this region is then considered for the calculation of
the transformation matrix for alignment. The resulting transformation
matrix will be applied to the full image.
After you have drawn a rectangle region into the image, you can see
and change its coordinates with the X/Y/W/H input fields.
Crop Output Not visible if the Third Dimension (if available) is set to a particular
dimension instead of 2D Slices.
Activated: The image is cropped and only keeps the section which is
covered by the entire dimension, i.e. by all z-slices. The output image
can be of different size than the input image.
Deactivated: The output image is not cropped and keeps the size of
the input image. The image borders might get filled with a default
pixel value.
Single Component Activated: Displays the image channels. Select one of the image channels to be used to calculate the alignment transformation matrix, which is then also applied to the other channel(s).
Deactivated: Calculates an alignment transformation matrix for each
individual channel.
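The translation search that these options configure can be illustrated with a toy example. The sketch below is plain Python, not ZEN code, and `estimate_shift` is a hypothetical helper: it estimates the offset between two 1D intensity profiles by maximizing their overlap correlation, whereas the real alignment works in 2D, on several levels of the image pyramid, and with the selected interpolation.

```python
def estimate_shift(ref, moving, max_shift=5):
    """Estimate the integer offset of `moving` relative to `ref`
    by maximizing the overlap correlation of the two profiles."""
    best_shift, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        score = sum(
            ref[i] * moving[i + s]
            for i in range(len(ref))
            if 0 <= i + s < len(moving)
        )
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

profile = [0, 0, 1, 5, 9, 5, 1, 0, 0, 0]
shifted = [0, 0, 0, 0, 1, 5, 9, 5, 1, 0]  # the same peak, moved 2 px right
print(estimate_shift(profile, shifted))   # → 2
```

The image pyramid levels mentioned above simply repeat such a search on downsampled copies of the slices, refining the estimate at each finer level.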
See also
2 Aligning Z-Planes Automatically (Based on a ROI) [} 795]
The group Image Analysis provides different options to analyze images in single or batch mode. The measurement data can be embedded in the image, saved in a *.csv data list, or saved as a label image.
3.11.8.8.1 Analyze
This function is applicable to one image only, which must be selected in the Input tool. The measurement data is embedded in the image.
This function is applicable to one image only, which must be selected in the Input tool. The measurement data is stored in a *.csv list and is not embedded into the image.
The following file types are supported:
§ CZI
§ ZVI
§ BMP
§ TIF
§ JPG
Parameter Description
Data Folder Folder where the *.csv data lists will be stored.
Image Folder Folder of the *.czi images to be measured.
This function allows an analysis of all images in a folder. The measured data is stored in a *.csv list and is not embedded into the image.
The following file types are supported:
§ CZI
§ ZVI
§ BMP
§ TIF
§ JPG
Parameter Description
Image Folder Folder of the images to be measured.
Append Data Activated: For each class, an accumulated csv-data list is stored with
the measurement data of all images.
Deactivated: For each image, one data list is stored per class.
The image processing function Analyze to Label Image labels images based on an existing image analysis setting and the parameters that you select in the Parameters section.
Parameter Description
Setting Displays image analysis settings in the drop-down list.
– One channel and copy first Creates a one-channel output image and copies the content of the first channel.
– One channel per class Creates an output image with one channel per region class.
– One channel per class and copy Creates an output image with one channel per region class and copies the content of the original channels of each class.
– All referenced channels Creates all channels referenced in the image analysis setting.
– All referenced channels and copy Creates all channels referenced in the image analysis setting and copies the content of the original image.
– Copy all Channels and Label all Creates an output image containing each channel of the input image and overlays on each channel the masks of all classes defined in the image analysis setting.
– Region - Pixel Type Maximum Labels the region with the maximum value of the pixel type.
– Region - Label Value Labels the region with the value specified by the Label Value slider/input field.
– Region - Region Class ID Labels the region with the region class ID.
– Region - Region Class Color Labels the region with the region class color.
– Contour - Pixel Type Maximum Draws a contour with the maximum value of the pixel type.
– Contour - Label Value Draws a contour with the value specified by the Label Value slider/input field.
– Contour - Region ID Draws a contour with the color defined by the region ID.
– Contour - Region ID Color Draws a contour with the color defined by the region ID color.
– Contour - Region Class ID Draws a contour with the color defined by the region class ID.
– Contour - Region Class Color Draws a contour with the color defined by the region class color.
– Contour2 - Pixel Type Maximum Draws a double contour with the value defined by the maximum value of the pixel type.
– Contour2 - Label Value Draws a double contour with the value specified by the Label Value slider/input field.
– Contour2 - Region ID Draws a double contour with the color defined by the region ID.
– Contour2 - Region ID Color Draws a double contour with the color defined by the region ID color.
– Contour2 - Region Class ID Draws a double contour with the color defined by the region class ID.
– Contour2 - Region Class Color Draws a double contour with the color defined by the region class color.
Pixel Type Selects the pixel type for the output image.
See also
2 Using Analyze to Label Image [} 81]
The image processing function Analyze to Image Batch labels images based on an existing image analysis setting and the parameters that you select in the Parameters section.
Parameter Description
Setting Displays image analysis settings in the drop-down list.
– One channel and copy first Creates a one-channel output image and copies the content of the first channel.
– One channel per class Creates an output image with one channel per region class.
– One channel per class and copy Creates an output image with one channel per region class and copies the content of the original channels of each class.
– All referenced channels Creates all channels referenced in the image analysis setting.
– All referenced channels and copy Creates all channels referenced in the image analysis setting and copies the content of the original image.
– Copy all Channels and Label all Creates an output image containing each channel of the input image and overlays on each channel the masks of all classes defined in the image analysis setting.
– Region - Pixel Type Maximum Labels the region with the maximum value of the pixel type.
– Region - Label Value Labels the region with the value specified by the Label Value slider/input field.
– Region - Region Class ID Labels the region with the region class ID.
– Region - Region Class Color Labels the region with the region class color.
– Contour - Pixel Type Maximum Draws a contour with the maximum value of the pixel type.
– Contour - Label Value Draws a contour with the value specified by the Label Value slider/input field.
– Contour - Region ID Draws a contour with the color defined by the region ID.
– Contour - Region ID Color Draws a contour with the color defined by the region ID color.
– Contour - Region Class ID Draws a contour with the color defined by the region class ID.
– Contour - Region Class Color Draws a contour with the color defined by the region class color.
– Contour2 - Pixel Type Maximum Draws a double contour with the value defined by the maximum value of the pixel type.
– Contour2 - Label Value Draws a double contour with the value specified by the Label Value slider/input field.
– Contour2 - Region ID Draws a double contour with the color defined by the region ID.
– Contour2 - Region ID Color Draws a double contour with the color defined by the region ID color.
– Contour2 - Region Class ID Draws a double contour with the color defined by the region class ID.
– Contour2 - Region Class Color Draws a double contour with the color defined by the region class color.
Pixel Type Selects the pixel type for the output image.
Format Selects the format of the output image. You can save your image(s)
as:
§ czi
§ png
§ czi and png
See also
2 Using Analyze to Label Image [} 81]
This function allows the interactive analysis of all images in a folder. If steps of the image analysis
are marked as interactive in the image analysis setting, this function opens the Image Analysis
Wizard [} 941] with these steps. You can then adjust your analysis for each of the images. If you
deselect one of the interactive steps during the process, this step will not be shown for the analy-
sis of the following images. The image analysis setting remains unchanged by the changes made
during Analyze Interactive Batch.
The measured data is embedded in each original image.
Parameter Description
Image Folder Folder of the *.czi images to be measured.
3.11.8.9 Lightsheet
This group of processing functions allows you to analyze multiview, dual side illuminated Lightsheet images. The following functions are available:
This function merges the image acquired with the light sheet from the right side with the image from the left side. It can be used for the following datasets acquired with Dual Side illumination:
Parameter Description
Settings Allows handling predefined settings (templates). See also General Settings [} 83].
– Rename Renames the setting.
– Save Saves a modified setting under the current name. An asterisk indicates
the modified state.
Mean The average intensity for each pixel is calculated using the values of
both illumination sides of the fusion.
– Current View Displays the selected view of a Multiview image in the 2D tab, which is used for Fusion Subset in X.
Note: The position of the sliders Left and Right as well as the Blending value have to be defined for each view individually.
– Left Selects the area of the left illumination side which is used for fusion.
The spin box shows the cut-off x pixel value. The cutoff is shown in
the 2D view with a red line.
Note: It is best to set the illumination slider of the Dimensions view
control to 1 to see the left-side illuminated image.
– Right Selects the area of the right illumination side which is used for fusion. The spin box shows the cut-off x pixel value. The cutoff is shown in the 2D view with a blue line.
Note: It is best to set the illumination slider of the Dimensions view
control to 2 to see the right-side illuminated image.
– Blending Defines the number of pixels used within an existing overlap between left and right illumination to crossfade both illumination sides.
Note: To avoid a visible change in intensity along the two cut-off lines for Mean Fusion with Fusion Subset in X, an overlap of both illumination sides should remain in the center of the image.
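The interplay of the Left and Right cut-offs and the Blending width can be sketched as a per-row crossfade. The sketch below is plain Python, not ZEN code; the function name and the linear weighting across the overlap are assumptions for illustration only:

```python
def fuse_dual_side(left_row, right_row, right_cut, left_cut):
    """Fuse one image row from both illumination sides.
    left_cut:  left illumination is used up to this x (red line)
    right_cut: right illumination is used from this x on (blue line)
    Pixels in the overlap [right_cut, left_cut) are crossfaded linearly."""
    fused = []
    overlap = left_cut - right_cut
    for x in range(len(left_row)):
        if x < right_cut:                   # left side only
            fused.append(left_row[x])
        elif x >= left_cut:                 # right side only
            fused.append(right_row[x])
        else:                               # blend across the overlap
            w = (x - right_cut + 1) / (overlap + 1)  # 0 → left, 1 → right
            fused.append((1 - w) * left_row[x] + w * right_row[x])
    return fused

left = [10] * 8   # row as seen with left illumination
right = [20] * 8  # row as seen with right illumination
print(fuse_dual_side(left, right, right_cut=2, left_cut=6))
```

With these settings the fused row ramps smoothly from the left value to the right value across the four overlap pixels, which is exactly why the note above recommends keeping an overlap in the image center.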
Parameter Description
Settings Allows handling predefined settings (templates). See also General Settings [} 83].
– Save Saves a modified setting under the current name. An asterisk indicates
the modified state.
View to channel Converts views from a Multiview data set into channels.
Illumination to channel Converts illuminations from a Dual Side data set into channels.
Lightsheet processing allows you to deconvolve, register, and fuse Multiview images.
Parameter Description
Settings Allows handling predefined settings (templates). See also General Settings [} 83].
– Save Saves a modified setting under the current name. An asterisk indicates
the modified state.
– Export Exports the current setting.
Parameter Description
Parameter Settings Setting of alignment parameters.
– Registration Channel Selects the registration channel. Use a channel with fiducials to which the other channels will be registered.
Note: A selection of more than one channel for the registration
process is possible. For each selected channel an individual registra-
tion will be performed and, therefore, it is advisable to use one chan-
nel for the final registration. To test which channel will lead to the
best results, all channels can be selected for later comparison of the
results.
– Dual side fusion This operation merges the image acquired with the light sheet from the right side with the image from the left side. See also Dual Side Fusion [} 159].
– Fusion Subset in X Activated: Selects which area of the left and the right illumination image should be used for fusion. Only available for Mean or Maximum fusion. See also the description in Dual Side Fusion [} 159].
– Ignore Sample Activated: Uses a function to find incorrectly labeled fiducials and
eliminate them for registration.
– Expand to maximum value Activated: The volume of the resulting data set after registration and fusion is determined and additional black pixels (x,y) and z-slices are added to the views during registration. This prevents loss of data for later fusion.
– Scaling X=Y=Z Activated: The dimensions are downsized to the smallest of the three so that all have the same size. If the scaling in the Deconvolution or PSF settings is changed, the latter overrides the scaling settings here.
Input settings A subset of a present time series can be selected for the Registration
and Fusion processing.
– Scaling correction (µm) Sets the geometric scaling between X/Y and Z.
Fusion Fuses registered views in one z-stack or one z-stack over time.
– Mean Fusion The pixel in the fused image is determined by averaging the intensity
level of the pixels for the involved views.
– Mean Fusion + Multiview DCV Only available if the dataset has not undergone deconvolution before. The pixel in the fused image is determined by averaging the intensity level of the pixels of the involved views. After fusion, a deconvolution is performed. The Deconvolution [} 95] interface is displayed if selected.
– Fusion Subset in Z Activated: Defines which portion of the z-stacks of the contributing views of the experiment are used for the fusion process.
– Current View Selects the current view. For each view, the z-range can be set via
slider, spin boxes, or mouse click in the image. Registration will not be
affected by the z-range settings.
Note that for later deconvolution the calculated point spread function
(PSF) of a Multiview experiment assumes that all views equally con-
tribute to all areas of the image. This is not the case when using Sub-
set Z and might result in artefacts of the deconvolution result.
– Save unfused data Activated: Saves the registered image in addition to the fused image.
See also
2 Deconvolution (adjustable) [} 95]
Parameter Description
Parameter Settings Setting of alignment parameters.
– Registration Channel Selects the registration channel. Use a channel with fiducials to which the other channels will be registered.
Note: A selection of more than one channel for the registration
process is possible. For each selected channel an individual registra-
tion will be performed and, therefore, it is advisable to use one chan-
nel for the final registration. To test which channel will lead to the
best results, all channels can be selected for later comparison of the
results.
– Preregistration Preregistration performs the process in a 2D maximum intensity projection of the z-stacks. It is advisable to first perform a Pre-Registration to move similar structures into close proximity, which might facilitate the subsequent Registration.
– Hardware Rotation The views are rotated solely based on the information from the rotation positioning of the sample holder which is saved in the metadata.
– Trans Hardware Rotation The views are rotated solely based on the information from the rotation positioning of the sample holder which is saved in the metadata. Additionally, the x, y, and z axes are moved to match the views.
– Correlation Hardware Rotation The views are rotated solely based on the information from the rotation positioning of the sample holder which is saved in the metadata. Additionally, the x, y, and z axes are moved to match the views. Cross-correlation is used to determine the translation in x, y, and z.
– Use Pre Register Only No registration process is performed and only the result of the Pre-Registration is used.
– Trans Rotation The registration corrects along the x, y, and z axes and also the rotation along x and y.
– Dual side fusion Merges the image acquired with the lightsheet from the right side with the image from the left side. See also Dual Side Fusion [} 159].
– Fusion Subset in X Activated: Selects which area of the left and the right illumination image should be used for fusion. Only available for Mean or Maximum fusion. See also the description in Dual Side Fusion [} 159].
– Ignore Sample Activated: Uses a function to find incorrectly labeled fiducials and
eliminate them for registration.
– Expand to maximum value Activated: The volume of the resulting dataset after registration and fusion is determined and additional black pixels (x,y) and z-slices are added to the views during registration. This prevents loss of data for later fusion.
– Scaling X=Y=Z Activated: The dimensions are downsized to the smallest of the three so that all have the same size. If the scaling in the Deconvolution or PSF settings is changed, the latter overrides the scaling settings here.
Deactivated: One transformation matrix is generated for all the fol-
lowing time points based on the first time point. As a result, process-
ing for registration has only to be done once which reduces process-
ing time.
Input settings A subset of a present time series can be selected for the Registration
and Fusion processing.
Fusion Fuses registered views in one z-stack or one z-stack over time.
– Mean Fusion The pixel in the fused image is determined by averaging the intensity
level of the pixels for the involved views.
– Mean Fusion + Multiview DCV Only available if the dataset has not undergone deconvolution before. The pixel in the fused image is determined by averaging the intensity level of the pixels of the involved views. After fusion, a deconvolution is performed. The Deconvolution [} 95] interface is displayed if selected.
– Fusion Subset in Z Activated: Defines which portion of the z-stacks of the contributing views of the experiment are used for the fusion process.
– Current View Selects the current view. For each view, the z-range can be set via
slider, spin boxes, or mouse click in the image. Registration will not be
affected by the z-range settings.
Note that for later deconvolution the calculated point spread function
(PSF) of a Multiview experiment assumes that all views equally con-
tribute to all areas of the image. This is not the case when using Sub-
set Z and might result in artefacts of the deconvolution result.
– Save unfused data Activated: Saves the registered image in addition to the fused image.
When the registration for the data set is already determined and the transformation matrix is
saved as an .xml file (Filename.czi.xml), this option can be selected.
Parameter Description
Parameter Settings Setting of alignment parameters.
– Dual side fusion Merges the image acquired with the light sheet from the right side with the left side image. See also Dual Side Fusion [} 159].
– Fusion Subset in X Activated: Selects which area of the left and the right illumination image should be used for fusion. Only available for Mean or Maximum fusion. See also the description in Dual Side Fusion [} 159].
– Expand to maximum value Activated: The volume of the resulting data set after registration and fusion is determined and additional black pixels (x,y) and z-slices are added to the views during registration. This prevents loss of data for later fusion.
Input settings A subset of a present time series can be selected for the Registration
and Fusion processing.
Fusion Fuses registered views in one z-stack or one z-stack over time.
– Mean Fusion The pixel in the fused image is determined by averaging the intensity
level of the pixels for the involved views.
– Mean Fusion + Multiview DCV Only available if the dataset has not undergone deconvolution before. The pixel in the fused image is determined by averaging the intensity level of the pixels of the involved views. After fusion, a deconvolution is performed. The Deconvolution [} 95] interface is displayed if selected.
– Fusion Subset in Z Activated: Defines which portion of the z-stacks of the contributing views of the experiment are used for the fusion process.
– Current View Selects the current view. For each view, the z-range can be set via
slider, spin boxes, or mouse click in the image. Registration will not be
affected by the z-range settings.
Note that for later deconvolution the calculated point spread function
(PSF) of a Multiview experiment assumes that all views equally con-
tribute to all areas of the image. This is not the case when using Sub-
set Z and might result in artefacts of the deconvolution result.
– Save unfused data Activated: Saves the registered image in addition to the fused image.
The recorded views of an experiment are registered manually. The views of one channel at one time point of the experiment are the basis for the procedure. The z-stacks of the individual views are rotated based on the rotation information in the file's metadata and transformed into channels, resulting in one maximum intensity projection image. Within this newly generated image, the views, now behaving as channels, can be moved in x, y, z and rotated to overlay each other. When all views are laid on top of each other, the Apply button uses the adjusted parameters to register, and when required to fuse, the views for all time points of the experiment. An .xml file is created which can be used for the Use registration from file option. This .xml file can be found along with the processed data in the result folder.
Parameter Description
Parameter Settings Setting of alignment parameters.
– Registration Channel Selects the registration channel that should be used to manually overlay all views of the image.
Note: Only one channel can be used for this procedure; all other channels will not be displayed.
– Current Time Point Indicates the selected time point of a time series as set with the time slider in the Dimensions tab.
– Front View Converts views into channels and generates a maximum intensity projection along the z-axis, displaying the x- (horizontal) and y- (vertical) axes of the channels.
Note: It is advisable to assign different colors to the channels to ease the alignment process. Channels can be turned on/off in the Dimensions tab.
– Side View Converts views into channels and generates a maximum intensity projection along the x-axis, displaying the z- (horizontal) and y- (vertical) axes of the channels.
Note: It is advisable to assign different colors to the channels to ease the alignment process. Channels can be turned on/off in the Dimensions tab.
– Alignment X Shifts the selected view against View 1 in X. Not available for Side
View.
– Alignment Z Shifts the selected view against View 1 in Z. Not available for Front
View.
– Rotation Rotates the selected view around the position of the red cross overlay
within the image of the 2D tab.
– Reset Moves all views back to the position where the maximum intensity
projection was generated.
– Preview Produces a new image of the channel and the time point which is reg-
istered with the provided settings. The image will contain fused views
when Mean Fusion is selected for Fusion.
– Dual side fusion Merges the image acquired with the lightsheet from the right side with the image from the left side. See also Dual Side Fusion [} 159].
– Fusion Subset in X Activated: Selects which area of the left and the right illumination image is used for fusion. Only available for Mean or Maximum fusion. See also the description in Dual Side Fusion [} 159].
– Expand to maximum value Activated: The volume of the resulting dataset after registration and fusion is determined and additional black pixels (x,y) and z-slices are added to the views during registration. This prevents loss of data for later fusion.
– Scaling X=Y=Z Activated: The dimensions are downsized to the smallest of the three so that all have the same size. If the scaling in the Deconvolution or PSF settings is changed, the latter will override the scaling settings here.
Input settings A subset of a present time series can be selected for the Registration
and Fusion processing.
Fusion Fuses registered views in one z-stack or one z-stack over time.
– Mean Fusion The pixel in the fused image is determined by averaging the intensity
level of the pixels for the involved views.
– Mean Fusion + Multiview DCV Only available if the dataset has not undergone deconvolution before. The pixel in the fused image is determined by averaging the intensity level of the pixels of the involved views. After fusion, a deconvolution is performed. The Deconvolution [} 95] interface is displayed if selected.
– Fusion Subset in Z Activated: Defines which portion of the z-stacks of the contributing views of the experiment are used for the fusion process.
– Current View Selects the current view. For each view, the z-range can be set via
slider, spin boxes, or mouse click in the image. Registration will not be
affected by the z-range settings.
Note that for later deconvolution the calculated point spread function
(PSF) of a Multiview experiment assumes that all views equally con-
tribute to all areas of the image. This is not the case when using Sub-
set Z and might result in artefacts of the deconvolution result.
– Save unfused data Activated: Saves the registered image in addition to the fused image.
The function allows you to perform a Maximum Intensity Projection (MIP) for selected dimensions.
Parameter Description
Settings Allows handling predefined settings (templates). See also General Settings [} 83].
– Save Saves a modified setting under the current name. An asterisk indicates
the modified state.
Coordinate Selects the coordinate for which the maximum intensity projection
(MIP) is performed.
– Z MIP of a z-stack.
– C MIP of channels.
– V MIP of views.
Range Sets the range of the dimension to be used for the maximum intensity
projection. The range can be set with the two slider bars or the spin
boxes.
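The projection itself is simple to state. A minimal sketch in plain Python (not ZEN code) for Coordinate = Z, projecting a z-stack of 2D slices onto a single image:

```python
def max_intensity_projection(stack):
    """Project a z-stack (list of 2D slices) onto one 2D image by
    keeping, for every (y, x) position, the maximum value over z."""
    return [
        [max(slice_[y][x] for slice_ in stack)
         for x in range(len(stack[0][0]))]
        for y in range(len(stack[0]))
    ]

stack = [
    [[1, 2], [3, 4]],   # z = 0
    [[5, 0], [1, 9]],   # z = 1
]
print(max_intensity_projection(stack))  # → [[5, 2], [3, 9]]
```

A MIP over channels (C) or views (V) works the same way, with the maximum taken over the corresponding coordinate instead of z, and the Range setting simply restricts which slices enter the `max`.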
3.11.8.10 Segmentation
3.11.8.10.1 Canny
Canny detects edges in an image. This function detects relatively thick contours at the edge of
bright regions.
Parameter
Parameter Description
Sigma Degree of smoothing of input image before detection of edges. A
Gauss filter is used as a smoothing function. The smoothing factor
can be used to influence the sensitivity of recognition. If a low value is
set, lots of edges are detected. Fewer edges are detected with a high
value. If the value 0 is set, no smoothing is performed.
Threshold Steepness of the edges to be detected. Low values mean "flat" edges
with a wide transition area between two regions. In this case lots of
edges are detected. If high values are used, fewer edges are detected,
as only steep transition areas are interpreted as edges.
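The effect of the two parameters can be illustrated with a deliberately simplified 1D sketch (plain Python, not ZEN's implementation; a full Canny detector additionally uses gradient direction, non-maximum suppression, and hysteresis). Repeated smoothing passes stand in for the Gauss filter's sigma, and the threshold keeps only steep transitions:

```python
def detect_edges(row, smooth_passes, threshold):
    """Return indices where the (smoothed) gradient is steep enough.
    More smoothing suppresses weak edges; a higher threshold keeps
    only steep transitions, as described for Sigma and Threshold."""
    for _ in range(smooth_passes):  # crude stand-in for the Gauss filter
        row = [row[0]] + [
            (row[i - 1] + 2 * row[i] + row[i + 1]) / 4
            for i in range(1, len(row) - 1)
        ] + [row[-1]]
    gradient = [row[i + 1] - row[i] for i in range(len(row) - 1)]
    return [i for i, g in enumerate(gradient) if abs(g) >= threshold]

step = [0, 0, 0, 100, 100, 100]
print(detect_edges(step, smooth_passes=0, threshold=50))  # → [2]
```

Raising the threshold or adding smoothing passes removes flatter transitions first, which mirrors the parameter descriptions above.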
3.11.8.10.2 Marr
This method detects edges or regions in an image. In contrast to Valleys and Canny, here the image is smoothed using a Gauss filter, followed by a Laplace filter, and the edges (Display Mode > Edges) or regions (Display Mode > Regions) are detected.
Parameter
Parameter Description
Sigma Degree of smoothing of input image before detection of edges or re-
gions. A Gauss filter is used as a smoothing function. The smoothing
factor can be used to influence the sensitivity of recognition. If a low
value is set, lots of edges are detected. Fewer edges are detected with
a high value. If the value 0 is set, no smoothing is performed.
Display Mode
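A minimal 1D sketch of this idea (plain Python, not ZEN code): smooth with a small binomial filter as a stand-in for the Gauss filter, apply a discrete Laplacian, and report sign changes (zero crossings) as edge positions:

```python
def marr_edges(row):
    """Laplacian-of-Gaussian sketch: smooth, take the second
    derivative, and mark zero crossings as edges."""
    smoothed = [row[0]] + [
        (row[i - 1] + 2 * row[i] + row[i + 1]) / 4
        for i in range(1, len(row) - 1)
    ] + [row[-1]]
    lap = [0.0] + [
        smoothed[i - 1] - 2 * smoothed[i] + smoothed[i + 1]
        for i in range(1, len(smoothed) - 1)
    ] + [0.0]
    # an edge lies where the Laplacian changes sign
    return [i for i in range(len(lap) - 1) if lap[i] * lap[i + 1] < 0]

step = [0, 0, 0, 100, 100, 100]
print(marr_edges(step))  # → [2]
```

Because the zero crossing sits exactly at the inflection point of the smoothed step, this method localizes edges well even when Sigma is increased.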
3.11.8.10.3 Threshold
This function performs a segmentation based on the definition of a brightness range (separated
according to color channels (red, green, blue)) for the regions to be segmented. All pixels whose
color values lie within the defined color range are marked as region pixels in the resulting image.
All the pixels whose color values lie outside the defined color range are marked as background
pixels (black).
In the resulting image, the color values of the region pixels can either be set permanently to white
or adopted unchanged. If you set the region pixels permanently to white, the result is a binary im-
age, which can then be used as a mask image for a subsequent automatic measurement.
Parameters
Parameter Description
Level Low Determines the lower brightness threshold for the regions to be seg-
mented. All the pixels whose gray values lie below this threshold value
are marked as background pixels (black).
Level High Determines the upper brightness threshold for the regions to be seg-
mented. All the pixels whose gray values lie above this threshold value
are marked as background pixels (black).
The following parameters are only visible if the Show All mode is activated:
Parameter Description
Create binary Activated: The resulting image is a binary image. Pixels within the
calculated gray level range are set to the maximum gray value (white),
whilst pixels outside it are set to the gray value 0.
Deactivated: The resulting image is of the same type as the input im-
age. Pixels within the calculated gray level range are set to the origi-
nal gray value. Pixels outside it are set to 0.
Invert result Activated: Inverts the effect of the function. The segmented regions
will be given the value 0, and all other pixels the gray value white or
the gray value/color of the input image.
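The behavior of Level Low, Level High, Create binary, and Invert result can be sketched on a single row of gray values (plain Python, not ZEN code; the parameter names mirror the table above):

```python
def threshold(pixels, level_low, level_high, create_binary=True,
              invert=False, white=255):
    """Mark pixels inside [level_low, level_high] as region pixels
    (white, or their original value), everything else as background (0).
    With invert=True the roles of region and background are swapped."""
    out = []
    for p in pixels:
        inside = level_low <= p <= level_high
        if invert:
            inside = not inside
        out.append((white if create_binary else p) if inside else 0)
    return out

row = [12, 80, 200, 130, 40]
print(threshold(row, 50, 150))                       # → [0, 255, 0, 255, 0]
print(threshold(row, 50, 150, create_binary=False))  # → [0, 80, 0, 130, 0]
```

The first call produces the binary mask suitable for a subsequent automatic measurement; the second keeps the original gray values of the segmented pixels.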
This method performs an automatic gray value segmentation. The function calculates the two
minimums in the individual channels in the gray value histogram of the input image (Input) and
uses these for the segmentation.
The following parameters are only visible if the Show All mode is activated:
Parameter Description
Method
- Otsu For all possible threshold values, the Otsu method calculates the vari-
ance of intensities on each side of the respective threshold. It mini-
mizes the sum of the variances for the background and the fore-
ground.
Parameter Description
- Iso Data The threshold value lies in the middle between two maximums in the
histogram.
- Triangle Threshold
- Three Sigma Threshold The threshold value is calculated from the sum of the average and three times the sigma value of the histogram distribution.
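The Otsu criterion described above can be sketched directly (plain Python, not ZEN code): try every candidate threshold and keep the one that minimizes the summed within-class variances of background and foreground:

```python
def otsu_threshold(pixels):
    """Exhaustive Otsu sketch: minimize the weighted sum of the
    intensity variances on both sides of the threshold."""
    def variance(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals) / len(vals)

    best_t, best_score = None, float("inf")
    for t in sorted(set(pixels)):
        bg = [p for p in pixels if p <= t]   # background class
        fg = [p for p in pixels if p > t]    # foreground class
        score = len(bg) * variance(bg) + len(fg) * variance(fg)
        if score < best_score:
            best_t, best_score = t, score
    return best_t

# two well-separated populations: the threshold falls between them
pixels = [10, 12, 11, 10, 200, 205, 198, 202]
print(otsu_threshold(pixels))  # → 12
```

Real implementations evaluate this criterion on the gray value histogram rather than on raw pixel lists, which is much faster but yields the same threshold.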
Parameters
Parameter Description
Create binary Activated: The resulting image is a binary image. Pixels within the
calculated gray level range are set to the maximum gray value (white),
whilst pixels outside it are set to the gray value 0.
Deactivated: The resulting image is of the same type as the input im-
age. Pixels within the calculated gray level range are set to the origi-
nal gray value. Pixels outside it are set to 0.
Invert result Activated: Inverts the effect of the function. The segmented regions
will be given the value 0, and all other pixels the gray value white or
the gray value/color of the input image.
This method performs an adaptive gray value segmentation. This procedure is particularly well
suited to the segmentation of small structures against a varying background.
The function initially applies a low pass filter and then subtracts this low-pass-filtered image from
the input image. The effect of this function mainly depends on the size of the filter matrix: Select
a low value for Size to segment small regions or regions with low gray value contrast from the
background. Select a higher value for Size to segment larger regions from the background.
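A 1D sketch of this procedure (plain Python, not ZEN code; the real function works on the 2D image with a square filter matrix): subtract a moving-average low pass of width Size from the signal and keep pixels that exceed their local mean by more than Threshold:

```python
def adaptive_threshold(row, size, delta):
    """Segment pixels that stand out from a varying background:
    size  – width of the moving-average low pass (odd number)
    delta – required gray value difference above the local mean"""
    half = size // 2
    out = []
    for i, p in enumerate(row):
        window = row[max(0, i - half): i + half + 1]
        local_mean = sum(window) / len(window)   # the low-pass value
        out.append(255 if p - local_mean > delta else 0)
    return out

# a small bump on a sloping background is still detected
row = [10, 11, 12, 60, 14, 15, 16, 17]
print(adaptive_threshold(row, size=3, delta=20))  # → [0, 0, 0, 255, 0, 0, 0, 0]
```

Because the background estimate follows the slope, a plain global threshold would need to be chosen per image region, while this local subtraction handles the varying background automatically.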
Parameters
Parameter Description
Kernel Size Matrix size of the low pass filter in x- and y-direction symmetrically
around the pixel in question. Determines the extent of the smoothing
effect. As the affected pixel is at the center, the edge length of the fil-
ter matrix is always an odd number. If an even number is entered via
the keyboard, the value is always set to the next highest odd number.
Threshold This value defines the gray value difference between the regions to be
detected and the background. Segmented pixels are set to the maxi-
mum gray value (white), whilst other pixels are set to the gray value 0.
The following parameters are only visible if the Show All mode is activated:
Parameter Description
Create binary Activated: The resulting image is a binary image. Pixels within the
calculated gray level range are set to the maximum gray value (white),
whilst pixels outside it are set to the gray value 0.
Deactivated: The resulting image is of the same type as the input im-
age. Pixels within the calculated gray level range are set to the origi-
nal gray value. Pixels outside it are set to 0.
Invert result Activated: Inverts the effect of the function. The segmented regions
will be given the value 0, and all other pixels the gray value white or
the gray value/color of the input image.
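The adaptive segmentation described above can be sketched in a few lines of NumPy. The function name and defaults are illustrative, not part of ZEN; the sketch assumes an 8-bit grayscale image, uses a box filter as the low pass, and behaves as if Create binary were activated:

```python
import numpy as np

def adaptive_threshold(image, kernel_size=15, threshold=10):
    """Low-pass the image, subtract, and keep pixels that stand out."""
    if kernel_size % 2 == 0:            # even sizes are rounded up, as in ZEN
        kernel_size += 1
    pad = kernel_size // 2
    padded = np.pad(image.astype(np.float64), pad, mode="edge")
    # Box low-pass via a sliding-window mean (stands in for the filter).
    windows = np.lib.stride_tricks.sliding_window_view(
        padded, (kernel_size, kernel_size))
    lowpass = windows.mean(axis=(-2, -1))
    # Pixels exceeding the local background by `threshold` become white.
    return np.where(image.astype(np.float64) - lowpass >= threshold,
                    255, 0).astype(np.uint8)
```

Because the low-pass image tracks the varying background, a small bright structure is segmented even when the absolute background level changes across the image.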
3.11.8.10.6 Valleys
This method detects dark lines (gray value valleys) on a bright background and contours between
bright regions.
Parameter
Parameter Description
Sigma Degree of smoothing of input image before detection of valleys. The
smoothing factor can be used to influence the sensitivity of recogni-
tion. If a low value is set, lots of valleys are detected. Fewer valleys are
detected with a high value.
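The principle can be illustrated on a 1-D gray-value profile: smooth with a Gaussian whose width is controlled by Sigma, then flag local minima. This is only a sketch of the idea, not the ZEN implementation; a larger sigma smooths away shallow dips, so fewer valleys are reported:

```python
import numpy as np

def detect_valleys(profile, sigma=1.0):
    """1-D sketch: smooth a gray-value profile, then flag local minima."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    # Gaussian smoothing with edge padding, then strict local minima.
    s = np.convolve(np.pad(profile, radius, mode="edge"), k, "valid")
    inner = (s[1:-1] < s[:-2]) & (s[1:-1] < s[2:])
    return np.flatnonzero(inner) + 1
```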
3.11.8.11 Sharpen
3.11.8.11.1 Delineate
This method enhances the edges of individual regions in an image. It corrects the halo effect and
only affects edges.
Parameters
Parameter Description
Threshold Enter the threshold value for edge detection using the slider or spin
box/input field. The threshold value should correspond roughly to the
gray value difference between objects and the background.
Size Enter the size of the edge detection filter using the slider or spin box/
input field. The value should correspond to the size of the transition
area between objects and the background.
This method allows you to enhance contours in an image and emphasize regions in which gray
values change. The function is suitable for visually emphasizing fine structures in an image.
Parameters
Parameter Description
Strength Here you can adjust the factor for increasing edge enhancement.
Normalization Here you can select how the gray/color values that exceed or fall
short of the value range should be dealt with.
- Clip Automatically sets the gray levels that exceed or fall short of the pre-
defined gray value range to the lowest or highest gray value (black or
white). The effect corresponds to underexposure or overexposure. In
certain circumstances some information may therefore be lost.
- Automatic Normalizes the gray values automatically to the available gray value
range.
- Wrap If the result is larger than the maximum gray value of the image, the
maximum gray value + 1 is deducted from this value.
- Shift Normalizes the output to the value "gray value + maximum gray
value/2".
With this method, you can combine the sharp regions from the individual sections of a z-stack im-
age to create a single sharp image. This enables you to display a considerably larger depth of field
than is possible on a microscope.
Parameter Description
Method
- Wavelets Uses a wavelet transformation to detect the sharpest areas in the im-
ages.
- Contrast For this method, the value is the difference between the brightest and
the darkest pixel value within the “Kernel”.
- Maximum Projection Images with the brightest and darkest pixels are generated first. Of these images, the one with the higher variance is used as the resulting image. Usually best suited for fluorescence images.
- Variance Calculates the variance of the pixel values within the “Kernel”. Usually best suited for bright field images.
- Fast EDF Uses a method for fast EDF to detect the sharpest areas in the images.
Z-Stack Alignment This parameter should not be used for processing whole slide image
scans. It is only required when processing images stemming from a
stereo microscope.
Not visible if you have selected Maximum Projection as Method.
Selects if the z-stack image should be aligned before the calculation
and with what quality level. If you acquire images with a stereo microscope, the images are displaced against each other. This displacement can be corrected. The higher the selected quality of alignment, the longer the calculation takes. Select No Alignment if you acquire images with a compound microscope.
- No Alignment The z-stack image is not aligned before the calculation. Select this set-
ting if the z-stack image has not been acquired using a stereo micro-
scope.
- Normal Uses an alignment with high speed for normal image quality.
- High Uses an alignment with lower speed for high image quality.
- Highest Uses an alignment with the lowest speed with best image quality.
Process Tiles Separately Only visible if you have selected Maximum Projection as Method.
Activated: Processes each tile region separately.
Default values (depending on the selected preset):
§ Preset = Default: 1
§ Preset = Small Structures: 0
§ Preset = Medium Structures: 2
§ Preset = Large Structures: 2
See also
2 Using Direct Processing [} 230]
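The Variance method above can be sketched as follows: a local variance score is computed for every z-plane, and for each pixel the plane with the highest score is taken as the sharpest. This is an illustrative NumPy version, not ZEN's actual algorithm; the function name and kernel size are assumptions:

```python
import numpy as np

def edf_variance(zstack, kernel=3):
    """zstack: (Z, H, W) array. Pick, per pixel, the sharpest z-plane."""
    pad = kernel // 2
    Z, H, W = zstack.shape
    scores = np.empty((Z, H, W))
    for z in range(Z):
        p = np.pad(zstack[z].astype(np.float64), pad, mode="edge")
        win = np.lib.stride_tricks.sliding_window_view(p, (kernel, kernel))
        scores[z] = win.var(axis=(-2, -1))   # local contrast measure
    best = scores.argmax(axis=0)             # sharpest plane per pixel
    return np.take_along_axis(zstack, best[None], axis=0)[0]
```

Flat, out-of-focus planes have near-zero local variance, so textured (in-focus) planes win the per-pixel comparison and the output combines the sharp regions of the whole stack.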
By using this method, you can increase the impression of sharpness in an image and consequently
obtain an image which shows structures with more contrast. The function enhances contrast es-
pecially for smaller structures.
Parameter Description
Strength Enter the strength of the Unsharp Masking using the slider or spin
box/input field. The higher the value selected, the greater the extent
to which small structures are enhanced.
Sigma Adjusts the sigma value derived from Gauss [} 179] filtering. Reduces
noise in an image. Each pixel is replaced by a weighted average of its
neighbors. The neighboring pixels are weighted in accordance with a
two-dimensional Gauss bell curve.
Color Mode Select the desired color mode from the drop-down list.
- RGB Calculates the sharpness for each color channel individually. The color
saturation and the color of structures may be changed, and color
noise may occur.
- Luminance Only calculates the sharpness on the basis of the brightness signal de-
tected. This mode does not show any color noise and changes the
color saturation accordingly.
Threshold Mode Here you can select a setting from the drop-down list for calculating
the boundary between the sharpened image regions.
It is only effective if the value for the Lower Threshold Value pa-
rameter is not equal to 0 or the value for the Upper Threshold
Value parameter is not equal to 100.
Parameter Description
- Linear Calculates a linear course.
Threshold Low Enter the lower threshold value using the slider or spin box/input field.
This determines the lower limit from which existing contrast structures
are changed.
Threshold High Enter the upper threshold value using the slider or spin box/input
field. This prevents structures in the image already featuring high con-
trast from being further enhanced unnecessarily.
Clip To Valid Bits Activated: The value range of the gray/color values of the output im-
age is adjusted to the value range of the input image.
See also
2 General Settings [} 83]
2 Using Direct Processing [} 230]
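The core of unsharp masking, independent of the threshold options, is: result = image + Strength x (image - blurred), where the blurred copy comes from Gaussian smoothing with the given Sigma. A minimal NumPy sketch, clipping to 8 bit as with the Clip To Valid Bits option (names and defaults are illustrative):

```python
import numpy as np

def unsharp_mask(image, strength=1.0, sigma=1.0):
    """image + strength * (image - gaussian_blur(image)), clipped to 8 bit."""
    # Separable Gaussian blur built from an explicit 1-D kernel.
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    img = image.astype(np.float64)
    blurred = img
    for axis in (0, 1):   # filter columns, then rows
        blurred = np.apply_along_axis(
            lambda r: np.convolve(np.pad(r, radius, mode="edge"), k, "valid"),
            axis, blurred)
    sharpened = img + strength * (img - blurred)
    return np.clip(sharpened, 0, 255).astype(np.uint8)
```

The difference image (image - blurred) is large only near gray-value transitions, which is why the method enhances contrast mainly for small structures and edges.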
3.11.8.12 Smooth
This method allows you to reduce noise in an image. Each pixel is replaced by a weighted average
of its neighbors. The weighting factors are calculated from the binomial coefficients in accordance
with the filter size. The binomial filter is very similar to a Gaussian filter in its effect.
Parameter
Parameter Description
Kernel Size Here you can adjust the size of the filter matrix. If the Show All mode
is activated, you can adjust the values in X, Y and Z direction individu-
ally.
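The weighting from binomial coefficients can be sketched directly: repeated convolution of [1, 1] with itself yields the rows of Pascal's triangle, which approximate a Gaussian. An illustrative NumPy version (function names are assumptions, not ZEN API):

```python
import numpy as np

def binomial_kernel(size):
    """1-D binomial weights, e.g. size 3 -> [1, 2, 1] / 4."""
    k = np.array([1.0])
    for _ in range(size - 1):
        k = np.convolve(k, [1.0, 1.0])
    return k / k.sum()

def binomial_smooth(image, size=3):
    """Separable smoothing: filter columns, then rows."""
    k = binomial_kernel(size)
    pad = size // 2
    img = image.astype(np.float64)
    for axis in (0, 1):
        img = np.apply_along_axis(
            lambda r: np.convolve(np.pad(r, pad, mode="edge"), k, "valid"),
            axis, img)
    return img
```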
3.11.8.12.2 Denoise
This method removes noise from images using wavelet transformations or total variation. The
process of denoising an image with wavelet transformations can be broken down into the follow-
ing three parts:
Parameter Description
Method
- Complex wavelets The Dual Tree Complex Wavelet transform provides better results because it is nearly direction invariant and makes more directional sub-bands available. The results are less prone to block artefacts. However, this method is computationally more intensive and therefore takes longer.
- Real wavelets The real wavelet transform only considers three directions (X, Y, Z) and is therefore faster. However, the result can show block artefacts.
- Total variation An algorithm based on A. Chambolle, "An Algorithm for Total Varia-
tion Minimization and Applications", J. Math. Imaging and Vision 20
(1-2): 89-97, 2004.
It uses the L1 norm for optimization. Typically, total variation gener-
ates small plateaus with constant gray values. The size of each of the
small plateaus depends on the Strength setting.
Strength Here you adjust the strength with which the function is applied.
See also
2 Using Direct Processing [} 230]
3.11.8.12.3 Gauss
This method allows you to reduce noise in an image. Each pixel is replaced by a weighted average
of its neighbors. The neighboring pixels are weighted in accordance with a two-dimensional
Gauss bell curve.
Parameter Description
Sigma Here you can adjust the sigma value. If the Show All mode is acti-
vated, you can adjust the values in each dimension individually.
3.11.8.12.4 Lowpass
This method allows you to reduce noise in an image. Each pixel is replaced by the average of its
neighbors. The size of the area of the neighboring pixels considered is defined by a quadratic filter
matrix. The modified pixel is the central pixel of the filter matrix.
Parameter
Parameter Description
Count Enter the number of repetitions using the slider or input field. The
function can be applied several times in succession to the result of the
filtering. This intensifies the effect accordingly.
Kernel Size Here you can adjust the size of the filter matrix. If the Show All mode
is activated, you can adjust the values in X, Y and Z direction individu-
ally.
3.11.8.12.5 Median
This method allows you to reduce noise in an image. Each pixel is replaced by the median of its
neighbors. The size of the area of the neighboring pixels considered is defined by a quadratic filter
matrix. The modified pixel is the central pixel of the filter matrix. The median is the middle value
of the gray values of the pixel and its neighbors sorted in ascending order.
Parameter Description
Kernel Size Here you can adjust the size of the filter matrix. If the Show All mode
is activated, you can adjust the values in X, Y and Z direction individu-
ally.
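A minimal sketch of the median filter described above, using an edge-padded sliding window (illustrative only, not the ZEN implementation):

```python
import numpy as np

def median_filter(image, kernel=3):
    """Replace each pixel by the median of its kernel x kernel neighbors."""
    pad = kernel // 2
    p = np.pad(image.astype(np.float64), pad, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(p, (kernel, kernel))
    return np.median(win, axis=(-2, -1))
```

Unlike an averaging filter, a single outlier cannot pull the result far from its neighbors, which makes the median well suited to impulse-like noise.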
3.11.8.12.6 Sigma
This method allows you to reduce noise in an image. Each pixel is replaced by the average of its
neighbors. The size of the area of the neighboring pixels considered is defined by a quadratic filter
matrix. The modified pixel is the central pixel of the filter matrix. To calculate the average, only
the gray values that lie within a defined range (+/- sigma) around the gray value of the central
pixel are taken into consideration. As a result, fine object structures are not blurred; only the gray
levels in image regions that belong together are adjusted.
Parameter
Parameter Description
Sigma Enter the sigma value using the slider or input field.
Kernel Size Here you can adjust the size of the filter matrix. If the Show All mode
is activated, you can adjust the values in X, Y and Z direction individu-
ally.
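The sigma filter logic, restricting the average to neighbors within +/- sigma of the central pixel's gray value, can be sketched as follows (function name and defaults are assumptions, not ZEN API):

```python
import numpy as np

def sigma_filter(image, sigma=10, kernel=3):
    """Average only neighbors within +/- sigma of the center pixel."""
    pad = kernel // 2
    img = image.astype(np.float64)
    p = np.pad(img, pad, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(p, (kernel, kernel))
    center = img[..., None, None]
    # Neighbors whose gray value lies inside the +/- sigma range;
    # the center pixel itself always qualifies, so the mask is never empty.
    mask = np.abs(win - center) <= sigma
    return (win * mask).sum(axis=(-2, -1)) / mask.sum(axis=(-2, -1))
```

Because pixels on the other side of a strong edge fall outside the sigma range, the edge is excluded from the average and fine structures stay sharp.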
With this function you can remove single pixel phenomena, such as those that occur in the case
of clocking induced charge with EMCCDs and as radio telegraph signal noise with CMOS sensors.
It is a filter which analyzes the input image and replaces pixels whose intensity value diverges strongly from the median intensity of their neighboring pixels.
The filter analyzes the input image and removes pixels that are "much" larger than the median of
their neighbors. The algorithm works as follows:
1. Sort all 9 pixels in a 3 x 3 neighborhood.
2. Determine the median intensity value of the sorted pixels.
3. Multiply the median by the threshold factor to get the limit.
4. If the center pixel intensity is larger than the limit, replace the pixel with the median.
Larger values of the threshold factor increase the value of the intensity limit and decrease the
number of pixels that are replaced. The default value of 1.5 is arbitrary, but seems to remove
charge induced noise from images acquired using cameras with EM gain capabilities. This filter
can also be used to remove hot pixels from images.
Parameter
Parameter Description
Threshold Here you adjust the threshold value.
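The four steps above translate almost directly into code. A NumPy sketch with the default threshold factor of 1.5 (illustrative, not the ZEN implementation):

```python
import numpy as np

def remove_single_pixels(image, threshold=1.5):
    """Replace pixels that are much brighter than their local median."""
    img = image.astype(np.float64)
    p = np.pad(img, 1, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(p, (3, 3))
    # Steps 1 + 2: median of each (sorted) 3 x 3 neighborhood.
    med = np.median(win, axis=(-2, -1))
    # Step 3: multiply the median by the threshold factor to get the limit.
    limit = med * threshold
    # Step 4: replace outliers (e.g. hot pixels) with the median.
    return np.where(img > limit, med, img)
```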
3.11.8.12.8 Whitening
Info
Prerequisite
This function is only available if you have installed the 3rd party Python Tools during the in-
stallation of ZEN software.
This method removes correlated noise components from an image, resulting in an image with so-called "white noise", where the noise in each pixel is not correlated with the noise in neighboring pixels. This function should be used to pre-process image data that is later used to train a Noise2Void denoising model using ZEN Intellesis Denoising. Keep in mind that a model trained on "whitened" datasets should only be applied to "whitened" images.
Parameter Description
Processing Direction Sets the direction in which the data is processed.
See also
2 Creating and Training an Intellesis Denoising Model [} 514]
2 Using a Trained Model for Denoising [} 516]
With this function a smoothing effect can be achieved by averaging out noise. It calculates the gliding average of a time series image, taking into account the defined number of time points according to the following schematic:
Input Image: SizeT = 6
Averaging Length: AvL = 3
Output image: SizeT(output) = SizeT – AvL + 1 = 6 – 3 + 1 = 4
Parameter
Parameter Description
Average Length Specifies the number of images used to determine the mean value.
The maximum value correlates with the number of time points.
Scaling Factor The preset value is 1. Values > 1 can be applied for images with low
intensity. In this case all pixel values are multiplied by the specified
factor.
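The schematic above corresponds to a simple moving average along the time axis. An illustrative NumPy version (parameter names are assumptions, not ZEN API):

```python
import numpy as np

def gliding_average(series, avg_len=3, scale=1.0):
    """series: (T, H, W) array. Output has T - avg_len + 1 time points."""
    T = series.shape[0]
    out = np.stack([series[t:t + avg_len].mean(axis=0)
                    for t in range(T - avg_len + 1)])
    # Optional scaling factor for low-intensity images.
    return out * scale
```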
3.11.8.13.2 Kymograph
This method creates a Kymograph. The input image has to be a time series image containing a
line-like graphical element (a line, arrow, curve or polygon curve) which is not locked.
Parameter Description
Graphic Tool Selects the desired graphic tool from the list. All graphic tools that can
be used are visible in the dropdown. Note that the tool must be se-
lected in the image as well.
Width Adjusts the width of the graphic tool (in pixels). This determines which pixels are used to calculate the average gray value along the width.
This method allows you to automatically align individual time points in order to compensate for
shifts between time points.
Info
For the alignment function to work, the presence of immobile and clearly distinguishable ob-
ject structures in the time series is required. Also, when aligning z-stacks over time, you should
always use the z dimension from the Third Dimension drop-down list. Otherwise, each z-plane would be aligned over time, potentially leading to z-stack artefacts.
Parameter Description
Registration Only visible if Load Transformation is deactivated.
Method Selects the method or a combination of methods which is used to
align the images.
- Translation The neighboring sections of the image are shifted in relation to each
other in the X and Y direction.
- Rotation The neighboring sections of the image are rotated in relation to each
other.
- Skew Scaling The neighboring sections of the image are corrected for skewness/
shearing.
- Affine The neighboring sections of the image are shifted in X and Y direc-
tion, rotated and the magnification is adjusted from section to sec-
tion.
– Low This is the most imprecise but also the fastest calculation of the align-
ment. It uses a low number of levels (2) of the image pyramid for the
alignment calculation.
– Medium This is a more precise but also a slower calculation of the alignment
than the one before. It uses a medium number of levels (3) of the im-
age pyramid for the alignment calculation.
– High This is a more precise but also a slower calculation of the alignment
than the two before. It uses a high number of levels (4) of the image
pyramid for the alignment calculation.
– Highest This is the most precise but also the slowest calculation of the align-
ment. It uses the highest number of levels of the image pyramid for
the alignment calculation.
- Nearest Neighbor The output pixel is given the gray value of the input pixel that is closest to it.
- Linear The output pixel is given the gray value resulting from the linear com-
bination of the input pixels closest to it.
- Cubic The output pixel is given the gray value resulting from a polynomial
function of the input pixels closest to it.
Crop Output Only visible if your image does not contain multiple tiles or scenes.
Activated: The image is cropped and only keeps the section which is
covered by the entire dimension (e.g. by all z-slices or time points).
The output image can be of a different size than the input image.
Deactivated: The output image is not cropped and keeps the size of the input image. The image borders might get filled with a default pixel value.
Single Component Only visible if you have selected an input image which contains di-
mensional components (e.g. scenes, z-planes, time points) and if this
component is not selected as Third Dimension.
Activated: Displays a control to select the dimensional component
that is used to calculate the alignment transformation matrix.
Deactivated: Calculates an alignment transformation matrix for each
individual component (e.g. for each individual scene).
Single Tile Only visible for images with multiple tiles and if Single Component is
activated.
Activated: Displays the Tile Component control.
– Tile Component Selects the tile component that is used to calculate the alignment transformation matrix.
See also
2 General Settings [} 83]
Info
If the input images have varying dimensions, the resulting image document can have gaps for
certain dimension coordinates.
This function joins two images to form a new time series image. The function offers an advanced
option for mapping the channels of the two images. This is useful for special situations that might
require manual channel mapping, e.g. if the same dye was used for different channels, or if the
dye name is missing in the image metadata.
Parameter Description
Sort Time Points by Acquisition Time Activated: Sorts the time points of the resulting image by acquisition time.
Deactivated: Does not sort by acquisition time. Note that if the first input image was acquired after the second, or if both images overlap in acquisition time, the relative times of some time points in the output image could be negative.
Define Channel Mapping Activated: Allows you to map the channels of the first input image to the channels of the second image. If you do not want to map one particular channel, you also have the option No Mapping. The resulting image will have all channels of the first image and all channels from the second image that do not have a mapping.
This function calculates the first and second order differential of a time lapse image according to
the following formula and schematic:
First Order Differential:
Output[t] = Input[t+1] – Input[t-1]
-> The difference is taken between the preceding and the following time point (a central difference), so the output is not directional. The first order differential represents the Speed.
Second Order Differential:
Output[t] = Input[t-1] + Input[t+1] – 2 x Input[t]
-> Second order differential is also known as the "Laplacian" and represents the Acceleration. It
enhances the fine details in the image (including noise). The smoothing kernel helps reduce this
noise.
Parameter
Parameter Description
Derivative Here you can select whether to calculate the first (speed) or second
(acceleration) order differential.
Smoothing Indicates the iterative, binomial smoothing filter. This reduces noise in
the differential images, whilst retaining maximums and minimums.
Value range: 0 – 50
Normalization Defines what to do with negative values resulting from the calcula-
tion.
§ Clip: negative values are set to 0.
§ Absolute: negative values are used positively.
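The two formulas and the normalization options can be sketched as follows (illustrative NumPy, applied along the time axis; the smoothing step is omitted):

```python
import numpy as np

def time_differential(series, order=1, normalization="clip"):
    """series: (T, ...) array; central first or second time derivative."""
    prev, cur, nxt = series[:-2], series[1:-1], series[2:]
    if order == 1:                 # "speed": Input[t+1] - Input[t-1]
        diff = nxt.astype(np.float64) - prev
    else:                          # "acceleration": Laplacian in t
        diff = prev.astype(np.float64) + nxt - 2.0 * cur
    if normalization == "clip":    # negative values are set to 0
        return np.clip(diff, 0, None)
    return np.abs(diff)            # "absolute": negatives used positively
```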
This function stitches heterogeneous CZI images together to create a new, single homogeneous
time series containing all dimensions and time points in their proper order. This differs from the
Time Concatenate function, which simply pastes one time lapse series to the end of another
without regard for the proper time order or channel content.
Missing images can either be filled with copies of the previous valid image in the series or filled
with black images.
When combining Z-stack time series with non-Z-stack time series a choice can be made between
either using only the center plane of the Z-stack or creating an extended focus projection of the Z-
stack before stitching the images together.
Parameter
Parameter Description
Fill Missing with
- Previous Fills a missing dimension index with a copy of the last existing image
from that index.
Z-Stacks
- Collapse (EDF) Reduces a z-stack with an extended focus function to a single plane
image which is then added to the output.
- Collapse (Center Plane) Only uses the center plane from a z-stack for the output image.
- Expand Copies the z-stack to the output unchanged, fills the missing indices
according to the setting in the Fill Missing with dropdown list.
3.11.8.14 Utilities
This method allows you to combine two input images that have different channels but otherwise
have the same dimension (Z-stack, tile, scene). An image is produced that contains all the chan-
nels of the input images.
If the two input images differ from one another in the dimensions Z-stack, time series, tiles or
scene, input image 1 and input image 2 are copied into the output image as two separate blocks.
With this method you can access the super-resolution data in images acquired with Airyscan.
Info
Note that starting with ZEN 2.5 blue edition, the black border of the processed image is auto-
matically removed. Hence the resulting image will be smaller by 24 pixels in X and Y dimen-
sion.
Parameters
Parameter Description
3D Processing This option is only available for images with 5 or more z-positions.
If activated, this option improves the resolution in axial and lateral di-
rection. The data set needs to have at least 5 z-sections acquired with
an optimal step size. 3D Processing is slower than 2D Processing. For
3D Processing, the whole z-stack (single channel and time point)
needs to fit into the physical memory.
2D SR Processing This function is available for 2D images only. It enhances the 2D reso-
lution.
Note that this only results in increased super-resolution when images are acquired with optimal settings and sufficient signal.
Auto Filter If activated, a suitable Super Resolution parameter for the Airyscan
processing is automatically determined for the selected data set.
To manually adjust the Super Resolution parameter, deactivate the
checkbox. Then determine suitable values by using the corresponding
function in the Airyscan viewer in the Airyscan view. Note that the
preview is only suitable for 2D Airyscan processing. A preview for 3D
Airyscan processing is not available. For adjusting 3D processing pa-
rameters, you should first process your data set once using the Auto
Filter and then check the value that was actually applied by the
Airyscan processing function. This value is stored in the metadata of
the processed image and can be accessed using the Info view.
Note: A high strength might look attractive for some images, z-planes or color channels, but filtering artefacts that appear like small rings in the image might occur. Also, the results become very sharp, but grainy. Carefully check your image data in order to avoid such artefacts.
Adjust per Channel Only visible, when the Auto Filter is deactivated.
Only available for images with two or more Airyscan channels.
If activated, you can manually set channel-specific Airyscan processing
parameters.
Strength Use this option for an increased (high) or decreased (low) strength of
the automatically assigned filter value. This is especially useful for 3D
processing, as the 2D preview of the processing filter value in the
Airyscan viewer does not allow conclusions about the result of a 3D data processing.
The increment of this parameter is ± 0.4 compared to the standard
auto Airyscan processing. This setting is not available when manual
processing strength is selected.
Parameter
Find the description of the parameters under: Deconvolution (adjustable) parameters. This method
is available for batch processing as well.
This method accepts ApoTome raw data only. The settings are similar to the ones on the Apo-
Tome tab (view option for ApoTome images). The function is also available for batch processing,
which makes it easy to convert a series of ApoTome RAW data images into deconvolved images.
Parameter Description
Display Mode ApoTome images are acquired as raw data. The Display Mode sets
how the image is calculated and displayed.
– Raw data Displays the raw data as output image and disables all other parame-
ters of the function.
– Local Bleaching Corrects the bleaching for each pixel individually (default setting). This is usually the best method.
– Phase Errors Corrects phase errors in the image without additional bleaching correction.
– Phase Errors and Global Bleaching Corrects phase errors in the image with additional global bleaching correction.
– Phase Errors and Local Bleaching Corrects phase errors in the image with additional local bleaching correction.
Normalization Here you can select how the gray/color values that exceed or fall
short of the value range should be dealt with. If you use this method
with Direct Processing, only the Clip method is available and prese-
lected.
– Clip Automatically sets the gray levels that exceed or fall short of the pre-
defined gray value range to the lowest or highest gray value (black or
white). The effect corresponds to underexposure or overexposure. In
certain circumstances some information may therefore be lost.
– Automatic Normalizes the gray values automatically to the available gray value
range.
See also
2 Using Direct Processing [} 230]
2 Apotome Tab [} 1046]
This method calculates a histogram distribution for selected measurement parameters of a mea-
surement data table.
Parameters
Parameter Description
Columns Define the measurement parameters for classification by entering the
column numbers freely, e.g. 1,3,5, or 1-6 or 1,3-7,8.
Click the button to open the Select columns dialog. Here the column names of the data can be activated or deactivated by clicking on the relevant checkbox.
Class Boundaries Select here, how you want the class boundaries of the calculated his-
togram to be determined.
- >=,…,< A numerical value falls into the histogram class if it is greater than or
equal to the lower class boundary and less than the upper class
boundary.
- >,…,<= A numerical value falls into the histogram class if it is greater than the
lower class boundary and less than or equal to the upper class bound-
ary.
Automatic Classification Activated: The class boundaries are calculated automatically from the data. The value range from the lowest to the highest data value is divided into as many classes of equal width as you have set in the Class Number input field.
Example:
Minimum value is 0
Maximum value is 10000
Range is 10000 units
Class Count is 4
Then the class boundaries are as follows:
Class 1: 0 .. 2500
Class 2: 2501 .. 5000
Class 3: 5001 .. 7500
Class 4: 7501 .. 10000
Display Mode Select here how you want the values of the histogram to be calculated.
- Count The histogram indicates how many data sets fall into the relevant class; it contains the frequency of the values in the class concerned.
- Count Cumulative The histogram cumulates the counts of values in each class. Class 1 contains the number of values for class 1, class 2 contains the sum of the counts from classes 1 and 2, class 3 contains the sum of the counts from classes 1 to 3, etc.
- Percentage The histogram indicates what percentage of the data sets fall into the relevant class; it therefore contains the percentage share of the values in the class concerned.
- Sum The histogram contains the sum of the numerical values of the data sets that fall into the relevant class; the values of the data sets that fall into the class concerned are therefore added together.
- Sum Cumulative The histogram cumulates the sums of the values in each class. Class 1 contains the sum of the numerical values from class 1, class 2 contains the sum of the numerical values from classes 1 and 2, class 3 contains the sum of the numerical values from classes 1 to 3, etc. The last class therefore contains the sum of all individual values.
- Percentage Sum The histogram indicates the percentage share of the total numerical values in the relevant class.
- Percentage Sum Cumulative The histogram cumulates the percentage shares of all classes up to and including the class concerned. Class 1 contains the percentage of the total numerical values from class 1, class 2 contains the sum of the percentages from classes 1 and 2, class 3 contains the sum of the percentages from classes 1 to 3, etc.
The last class therefore contains 100%.
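The Automatic Classification and a few of the display modes can be sketched as follows. np.histogram matches the ">=,…,<" boundary convention, except that the maximum value is included in the last class (illustrative only; names are assumptions):

```python
import numpy as np

def classify(values, class_count=4, mode="count"):
    """Automatic classification: equal-width classes from min to max."""
    v = np.asarray(values, dtype=np.float64)
    edges = np.linspace(v.min(), v.max(), class_count + 1)
    counts, _ = np.histogram(v, bins=edges)
    if mode == "count":
        return counts
    if mode == "count_cumulative":
        return np.cumsum(counts)          # running total over classes
    if mode == "percentage":
        return 100.0 * counts / counts.sum()
    raise ValueError(mode)
```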
This method allows you to change the pixel type of an image. This can be useful if you want to
compare or combine images that have different pixel types.
Parameters
Parameter Description
Pixel Format Select the desired pixel format from the dropdown list.
- 8 Bit B/W The output image is a monochrome image, the whole-number gray
values of which can lie in the range from 0 to 255.
- 16 Bit B/W The output image is a monochrome image, the whole-number gray
values of which can lie in the range from 0 to 65535.
- 32 Bit B/W Float The output image is a monochrome image with real numbers as pixel
values.
- 2x32 Bit Complex The output image is a monochrome image with complex numbers (real part and imaginary part) as pixel values. Such images are generally created by means of transformation into the Fourier space.
- 24 Bit RGB The output image is a color image, the whole-number color values of
which in the red, green, and blue channels can lie in the range from 0
to 255.
- 48 Bit RGB The output image is a color image, the whole-number color values of
which in the red, green, and blue channels can lie in the range from 0
to 65535.
- 2x32 Bit RGB Float The output image is a color image with real numbers as color values in the red, green and blue channels.
- 3x64 Bit RGB Complex The output image is a color image with complex numbers (real part and imaginary part) in the red, green and blue channels. Such images are generally created by means of transformation into the Fourier space.
With this method a color image can be generated from three input images containing the single color extractions Red, Green and Blue.
Parameters
Parameter Description
Output Pixel type Here you choose the desired output image format, e.g. 24 Bit RGB.
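Composing the color image amounts to stacking the three single-color planes into one RGB array. A minimal NumPy sketch for the 24 Bit RGB output format (illustrative only):

```python
import numpy as np

def compose_rgb(red, green, blue):
    """Stack three grayscale planes into one (H, W, 3) 24-bit RGB image."""
    return np.stack([red, green, blue], axis=-1).astype(np.uint8)
```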
With this function you can convert Lambda stacks which were acquired with LSM 800 into a file
with the same appearance as inside the Lambda view. In contrast to the generic raw data format
of the Lambda stacks, these files can be opened and analyzed normally in ZEN (black edition).
This method copies the annotations of one image into another image.
This method automatically corrects the jitter of the stage which can occur during the acquisition
of a Z-stack image.
3.11.8.14.12 Correlation
With this function you can, in conjunction with confocal data sets, display the spatial or temporal
correlation of an image or image stack. You can select which kind of correlation you want to per-
form by activating the corresponding checkboxes.
Parameters
Parameter Description
Cross Correlation If activated, you can correlate two images with each other. Note that
the second input image needs to have the same dimensionality and
size.
Time Correlates the signal in time. Only available for time series data sets.
Parameters
Parameter Description
Pattern Select the desired pattern for the gray scale image here.
- 2D Gray Scale Vertical The gray scale runs from top to bottom, starting with the gray value selected in parameter Min. Gray Value.
- 2D Gray Scale Horizontal The gray scale runs from left to right, starting with the gray value selected in parameter Min. Gray Value.
Width Set the desired width of the output image in pixels using the slider or
the input field.
Height Set the desired height of the output image in pixels using the slider or
the input field.
Min. Gray Value Set the minimum gray value of the gray scale using the slider or input
field.
Max. Gray Value Set the maximum gray value of the gray scale using the slider or input
field.
- 8 Bit B/W The output image is a monochrome image whose integer gray values
can be in the range of 0 to 255.
- 16 Bit B/W The output image is a monochrome image whose integer gray values
can be in the range of 0 to 65535.
- 24 Bit RGB The output image is a color image whose integer color values in the
channels Red, Green, Blue can be in the range of 0 to 255.
- 48 Bit RGB The output image is a color image whose integer color values in the channels Red, Green, Blue can be in the range of 0 to 65535.
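As a sketch of what the vertical and horizontal patterns produce, the following Python function builds such a gray ramp. The function name and the integer interpolation are assumptions for illustration, not ZEN's implementation.

```python
def gray_ramp(width, height, min_gray, max_gray, vertical=True):
    """2D gray-scale pattern: the gray value runs linearly from
    Min. Gray Value to Max. Gray Value, top to bottom (vertical)
    or left to right (horizontal). Illustrative sketch only."""
    def value(i, n):
        # Integer interpolation between min_gray and max_gray.
        return min_gray + (max_gray - min_gray) * i // max(n - 1, 1)
    if vertical:
        return [[value(y, height)] * width for y in range(height)]
    return [[value(x, width) for x in range(width)] for _ in range(height)]

print(gray_ramp(3, 3, 0, 255))
# → [[0, 0, 0], [127, 127, 127], [255, 255, 255]]
```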
This method allows you to extract parts from one image and use these to create a new image.
You can select these parts freely from the individual dimensions of the image. Each of the param-
eter sections is only visible if the corresponding dimension is present in the input image.
Info
Image Analysis Results
Note that if your image contains analysis results, the analysis results are deleted when you exe-
cute this function.
Parameter Description
Channels Selects which channels of the input image are used. All channels are
selected by default. To deselect a channel, click on the respective
channel button.
Z-Position, Time, Block, Scene Here you can select which parts of the input image you want to use for the resulting image.
- Extract All If selected, all parts of the corresponding image are extracted.
- Extract Range If selected, you can select a certain range of images to be extracted.
- Extract Multiple If selected, you can select several continuous ranges and individual sections.
Enter one or more sections that you want to select in the input field.
To do this, enter the first section, followed by a minus sign, and then
the last section. If you want to define an interval, after the last section
enter a colon and then the interval. The entry "2-10:2" means that ev-
ery second section is selected from section 2 to section 10.
Enter a comma after the first section if you want to define another
section. You can also select individual sections separated by commas.
By entering "2-10:2,14-18,20,23", you select every second section
from section 2 to section 10, followed by sections 14 to 18, as well as
sections 20 and 23.
- Get current position Adopts the position from the current display in the image area.
- Interval Activated: Interval mode is active. The Interval spin box/input field appears.
Enter the desired interval here. For example, if you enter the value 2, only every 2nd value from the range is considered.
Region Here you can select if you want to use the entire image or just a re-
gion (ROI) of the input image.
- Full Select this option to use the full image for the new image.
- Rectangle region (ROI) Select this option to draw in a rectangular region of interest, which will be used for creating the new image.
If a rectangle region was drawn in, you can see and change its coordinates by editing the X/Y/W/H input fields.
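The section syntax described above ("2-10:2,14-18,20,23") can be sketched as a small parser. This is illustrative Python only, not the actual ZEN implementation.

```python
def parse_sections(spec):
    """Parse a section string such as "2-10:2,14-18,20,23".

    Each comma-separated token is either a single section number,
    a range "first-last", or a range with interval "first-last:interval".
    Returns a sorted list of unique section numbers.
    Illustrative sketch only; not ZEN's actual parser."""
    sections = set()
    for token in spec.split(","):
        token = token.strip()
        if "-" in token:
            rng, _, interval = token.partition(":")
            first, last = (int(p) for p in rng.split("-"))
            sections.update(range(first, last + 1, int(interval) if interval else 1))
        else:
            sections.add(int(token))
    return sorted(sections)

print(parse_sections("2-10:2,14-18,20,23"))
# → [2, 4, 6, 8, 10, 14, 15, 16, 17, 18, 20, 23]
```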
This method allows you to extract certain dimensions, e.g. channels, regions or time series from
one image and use these to create a new image.
Info
Each of the dimensions described below is only visible if the corresponding dimension is
present in the input image.
Method Parameters
Parameter Description
Split Dimension Depends on the loaded image.
- None The image is not split by any dimension. Only the ranges of the differ-
ent dimensions defined below will be extracted for the new image.
- Channels (or: Time, Scenes etc.) Here you can select the dimension for splitting the data set. A new image document opens in ZEN for each element of the selected dimension. The available options depend on the selected image. If your input image contains two channels, split dimension creates two output images, one for each channel.
Channels Here you can select which channels of the input image you want to
be used. All channels are selected by default. To deselect a channel,
click on the respective channel button.
- Extract All Activated: All elements of the corresponding dimension are ex-
tracted.
- Extract Range Activated: You can select a certain range of elements to be ex-
tracted.
- Extract Multiple Activated: You can select several continuous ranges and individual sections.
Enter one or more sections that you want to select in the input field.
To do this, enter the first section, followed by a minus sign, and then
the last section. If you want to define an interval, after the last section
enter a colon and then the interval. The entry "2-10:2" means that ev-
ery second section is selected from section 2 to section 10.
Enter a comma after the first section if you want to define another
section. You can also select individual sections separated by commas.
By entering "2-10:2,14-18,20,23", you select every second section
from section 2 to section 10, followed by sections 14 to 18, as well as
sections 20 and 23.
Region
- Rectangle Region Takes into account the rectangle that you can draw in the 2D view. After drawing, you can modify the X, Y coordinates as well as the width (W) and height (H) manually.
Propagate ROI Only has an effect if a region (ROI) is defined in multi-scene images.
Activated: Applies the defined region (ROI) to all scenes.
For more information, see Creating Image Subset and Split Dimensions [} 81].
This method allows you to extract certain dimensions, e.g. channels, regions or time series from
one image and use these extracted dimensions to create a new image. The result file is saved in
your target folder.
Info
Each of the dimensions described below is only visible if the corresponding dimension is
present in the input image.
Method Parameters
Parameter Description
Split Dimension Depends on the loaded image.
- None The image is not split by any dimension. Only the ranges of the differ-
ent dimensions defined below will be extracted for the new image.
- Channels (or: Time, Scenes, etc.) Here you can select the dimension for splitting the data set. A new file will be created in the target folder for each element of the selected dimension. The available options depend on the selected image. If your input image contains two channels, split dimension creates two output files, one for each channel.
Channels Here you can select which channels of the input image you want to
be used. All channels are selected by default. To deselect a channel,
click on the respective channel button.
- Extract All Activated: All elements of the corresponding dimension are ex-
tracted.
- Extract Range Activated: You can select a certain range of elements to be ex-
tracted.
- Extract Multiple Activated: You can select several continuous ranges and individual sections.
Enter one or more sections that you want to select in the input field.
To do this, enter the first section, followed by a minus sign, and then
the last section. If you want to define an interval, after the last section
enter a colon and then the interval. The entry "2-10:2" means that ev-
ery second section is selected from section 2 to section 10.
Enter a comma after the first section if you want to define another
section. You can also select individual sections separated by commas.
By entering "2-10:2,14-18,20,23", you select every second section
from section 2 to section 10, followed by sections 14 to 18, as well as
sections 20 and 23.
Region
- Rectangle Region Takes into account the rectangle that you can draw in the 2D view. After drawing, you can modify the X, Y coordinates as well as the width (W) and height (H) manually.
Propagate ROI Only has an effect if a region (ROI) is defined in multi-scene images.
Activated: Applies the defined region (ROI) to all scenes.
Target Folder Selects the folder on the disk where the images are to be saved.
Compression
- Original The output image has the same compression as the original image.
Defaults Sets the values back to default, if they have been changed.
For more information, see Creating Image Subset and Split Dimensions [} 81].
To create experimental point spread functions from a Z-stack of subresolution fluorescent beads, please use the PSF Wizard function, which is available together with the Deconvolution module. It offers a guided procedure that starts with a stack of many beads and includes the Create PSF functionality.
The Create PSF function here requires that bead averaging has already been done. It is available only for legacy reasons.
This function creates a PSF (Point Spread Function) image from a Z-stack image of a bead ac-
quired for PSF measurement. Please observe the instructions for optimal acquisition here: Using
beads for PSF measurement.
The result is a so-called PSF image. For advanced settings and options, please use the specific con-
trol elements on the PSF Display [} 1049] tab.
Parameters
Parameter Description
Z-Stack Correction Activated: Performs background correction of the Z-stack before the
processing.
Circular Average Activated: Forces a PSF with lateral symmetry. This option should not
usually be activated as lateral asymmetries correspond better to the
real situation. Circular averaging is only recommended when a mea-
sured PSF is used with the Fast Iterative method.
Threshold Cropping Activated: The PSF is restricted to gray value ranges up to 0.25% of the brightest voxel present. If the value is reduced or the option is deactivated, the PSF may be larger. This increases the calculation time. However, it is also possible to achieve slightly better results in this case. This option is deactivated by default.
Threshold By using this slider and input field, you can set the percentage from which the PSF is clipped if the Threshold Cropping option has been selected.
Iterative Restoration Activated: If Z-stack images of beads with diameters greater than the microscope's resolution limit are used to generate the PSF, this option must be selected. The bead diameter used can be entered using the slider and input field.
This method allows you to insert an image subset back into the original image. Its contents are re-
placed by the contents of the image subset. Using this method, you can process a previously cre-
ated image subset using image processing functions and copy the result back into the original im-
age.
Parameter Description
In Place Activated: The changes are applied to the original image and no new
image document is opened as output.
Deactivated: The way the changes are applied is defined by the Out-
put tool.
Subset Contains the description of how the input image was created as a
subset.
Shows which areas have been selected in generating the subset im-
age for each dimension (channels, Z-stack, time series), as well as for
the defined image section.
Example:
The entry "Z (1-8:2) | T (2-7)" means that the sub-image consists of Z sections 1, 3, 5, 7 (interval 2) at time points 2 to 7 of the input image.
See also
2 Output Tool [} 984]
This method allows you to create an image pyramid and a pixel mask for valid pixels in multi-scene tile images, especially pyramid tiles. The pixel mask provides information (per sub-block) on whether each pixel contains real data or not. If pyramid tiles cover areas which do not overlap with the acquisition tiles, the pixels of these areas are classified as invalid. Such a pixel mask can prevent potentially false results for operations performed on the tile images. For example, visual artifacts when viewing the tiles can be reduced or eliminated (invalid pixels are rendered transparent in the image view) and the calculation of the histogram can be improved. The pyramid calculation never changes the values of the acquisition tiles, so the raw data remains untouched.
Option Description
Background Specifies which value is assigned to invalid pixels. Note that this background color is not visible in viewers such as the 2D view; it is merely a value for the invalid pixels.
– Auto Sets the value for the invalid pixels automatically based on the docu-
ment type, i.e. white for brightfield images and black for fluorescence
images.
Downsampling Filter Selects a filter which is applied when generating the pyramid steps.
– Blur Applies a 2x2 blur kernel before decimation (i.e. an average is calculated).
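As an illustration of the Blur downsampling filter, the following sketch averages each 2x2 block before decimation. The function name and data layout are assumptions for illustration; ZEN's actual filter implementation is not documented here.

```python
def downsample_2x2(img):
    """One pyramid step: average each 2x2 block (a 2x2 blur kernel
    followed by decimation). `img` is a list of rows of gray values;
    width and height must be even. Illustrative sketch only."""
    return [
        [
            (img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4.0
            for x in range(0, len(img[0]), 2)
        ]
        for y in range(0, len(img), 2)
    ]

level0 = [[0, 4, 8, 12],
          [0, 4, 8, 12],
          [2, 6, 10, 14],
          [2, 6, 10, 14]]
print(downsample_2x2(level0))
# → [[2.0, 10.0], [4.0, 12.0]]
```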
This method allows you to apply arithmetic operations to images in the form of a calculator.
You can process a single image or combine two images.
All operations are performed pixel by pixel.
Parameters
Parameter Description
Channel Input 1 Here you can select whether you want to use an individual channel or
all channels of the first input image for the calculation.
Channel Input 2 Here you can select whether you want to use an individual channel or
all channels of the second input image for the calculation.
First Images Activated: Uses only the first time points of the second input image (a time lapse image) for the calculation. This allows you, for example, to normalize a time lapse image to the intensity values of the first time points.
Enter the number of images that you want to be used for the calcula-
tion using the input field.
Formula Enter the calculation formula here using the keyboard and numeric
keypad. Use "S1" as a placeholder for the first input image and "S2"
for the second input image.
Input 1 Inserts the placeholder for the first input image into the Formula in-
put field at the current cursor position.
Input 2 Inserts the placeholder for the second input image into the Formula
input field at the current cursor position.
Absolute Intensities Activate this radio button if input image 1 and input image 2 have the same pixel type.
Normalize Intensities 0..1 Activate this radio button if input image 1 and input image 2 have different pixel types. To allow such images to be combined, the intensity values of the two images are normalized to the value range from 0 to 1 before the calculation.
Operators... Opens a list of all available operators. Here you can select the opera-
tor that you want. If you double-click on a list entry, it is inserted into
the Formula input field at the current cursor position.
This function creates a synthetic image where the dimensions can be defined.
Parameter Description
Width Width in x of the image.
Z Slices Number of z slices of the image. If the value is > 1, it will become a Z-
stack image.
Time Slices Number of time slices of the image. If the value is > 1, it will become a time series image.
Pixel Type Specifies the pixel type of the image.
Pattern
- Uniform All pixels of the image have identical Min. Gray Value.
- 2D Gray Scale The image shows a gray scale with values between Min. Gray Value
Vertical and Max. Gray Value from top to bottom.
- 2D Gray Scale The image shows a gray scale with values between Min. Gray Value
Horizontal and Max. Gray Value from left to right.
- Ramp The image shows a ramp with values between Min. Gray Value and
Max. Gray Value starting from each corner of the image to the cen-
ter.
- Gaussian The image shows a Gaussian shaped grayscale with values between
Min. Gray Value and Max. Gray Value starting from the borders of
the image to the center.
- Checkerboard The image shows a checkerboard where the “dark” fields have Min.
Gray Value and the “bright” fields have Max. Gray Value.
- Cosine Checkerboard The image shows a checkerboard where the “dark” fields have Min. Gray Value and the “bright” fields have Max. Gray Value, overlaid with a cosine modulation.
- Chirp Cosine The image shows a cosine pattern where the “dark” fields have Min.
Gray Value and the “bright” fields have Max. Gray Value overlaid
with a chirp modulation.
- Chirp Checker The image shows a checkerboard where the “dark” fields have Min.
Gray Value and the “bright” fields have Max. Gray Value overlaid
with a chirp modulation.
- Single Sphere A 3D (Z-stack) image is created that contains a single sphere with the diameter Sphere Diameter, positioned in the center of the image.
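As an example of such a synthetic pattern, the Checkerboard option can be sketched as follows. The field-size parameter is an assumption for illustration, since the manual does not state how ZEN sizes the fields.

```python
def checkerboard(width, height, field, min_gray, max_gray):
    """Synthetic checkerboard pattern: "dark" fields get Min. Gray
    Value, "bright" fields get Max. Gray Value. `field` is the field
    edge length in pixels (an assumed parameter). Sketch only."""
    return [
        # A field is "bright" when the sum of its field coordinates is odd.
        [max_gray if ((x // field) + (y // field)) % 2 else min_gray
         for x in range(width)]
        for y in range(height)
    ]

print(checkerboard(4, 4, 2, 0, 255))
# → [[0, 0, 255, 255], [0, 0, 255, 255], [255, 255, 0, 0], [255, 255, 0, 0]]
```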
Info
Linear Unmixing in Direct Processing
If you are using Linear Unmixing in Direct Processing, Automatic Component Extraction (ACE)
is not available as the image is not yet created when Direct Processing is set up. It offers only
the functionality to import and use reference spectra. You can import reference spectra (Im-
port from) and use the functionality to Calculate Residuals.
With this function you can extract the emission of single fluorescence dyes (e.g. GFP only, YFP
only etc.) from strongly overlapping multi-fluorescence data acquired in multi-channel images or
Lambda stacks (only available in LSM imaging mode). Note that the function needs (at least) two
fluorescence channels in the input image.
With the knowledge of the spectral characteristic of individual dyes within a sample with multiple
dyes, even heavily overlapping individual dye spectra can be mathematically extracted. This
method is a pixel-by-pixel image analysis procedure. Ideally, fluorescence spectra of samples la-
beled with one dye only are acquired and stored in the spectra database as an external reference.
This can be done either by employing the spectral detector of a LSM system or by setting up a
multichannel experiment on filter based multichannel systems. Then a multi-channel image or
Lambda stack from the multi-labeled sample is acquired. The individual dye spectra are then
mathematically extracted using the information from the reference spectra. Up to ten different
reference signals can be used in the least-square-fit based algorithm to produce a 10-channel out-
put image without any partial overlap between the channels.
Avoid detector saturation of fluorescence signal in the data set to be unmixed. Saturation gener-
ates a high signal in the residual channel and will have a negative impact on the unmixing result.
If samples are not available labeled with individual dyes only, the references can be obtained by
the following methods:
- Interactively, by user selection of regions in the image where only one fluorescence dye is present (only available in the Unmixing view).
- Automatically, by Automatic Component Extraction (ACE). Here the software tries to identify pixels in the acquired multichannel image whose intensity results from an individual dye only.
Note that ACE does not work in all cases and linear unmixing can then lead to wrong results. This
is especially the case when unmixing widefield multichannel fluorescence images, where there
might not be areas which have sufficiently pure single dye contribution. Here it is especially impor-
tant to acquire single-dye reference spectra first.
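The least-squares principle behind linear unmixing can be sketched for a single pixel and two reference spectra. This is illustrative Python only; ZEN's algorithm additionally supports up to ten references, weighting by noise, and other refinements not shown here.

```python
def unmix_pixel(pixel, ref_a, ref_b):
    """Least-squares unmixing of one pixel into two reference spectra.

    Solves min ||pixel - a*ref_a - b*ref_b||^2 via the 2x2 normal
    equations. Sketch of the principle only (no weighting and no
    non-negativity constraint, which a real implementation would add)."""
    aa = sum(r * r for r in ref_a)
    bb = sum(r * r for r in ref_b)
    ab = sum(ra * rb for ra, rb in zip(ref_a, ref_b))
    pa = sum(p * r for p, r in zip(pixel, ref_a))
    pb = sum(p * r for p, r in zip(pixel, ref_b))
    det = aa * bb - ab * ab
    a = (pa * bb - pb * ab) / det
    b = (pb * aa - pa * ab) / det
    # Residual channel: the part of the signal the fit cannot explain.
    residual = [p - a * ra - b * rb
                for p, ra, rb in zip(pixel, ref_a, ref_b)]
    return a, b, residual

# Two overlapping (hypothetical) reference spectra over 3 spectral channels:
gfp = [0.8, 0.5, 0.1]
yfp = [0.2, 0.6, 0.7]
mixed = [0.8 * 2 + 0.2 * 3, 0.5 * 2 + 0.6 * 3, 0.1 * 2 + 0.7 * 3]
a, b, res = unmix_pixel(mixed, gfp, yfp)
print(round(a, 6), round(b, 6))
# → 2.0 3.0
```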
Parameter Description
Import Reference Spectra For the unmixing process, previously generated emission spectra of ideally pure dyes can be loaded and used for unmixing. This function is mutually exclusive to the Automatic Component Extraction function.
– Import from Allows you to select and import reference spectra by clicking the corresponding button.
– Spectra List Displays the list of imported spectra with an ID, the File Name and
the Channel.
Automatic Component Extraction Use this function if no reference spectra are available. Indicates the number of components the system should be looking for in the image. The number of components cannot be higher than the number of channels. It will only work if each of the emission signals is present in an area of the image without overlap of another emission signal. Otherwise, ACE cannot produce a reliable result.
Weighted Unmixing If activated, spectral channels with high noise contribute less to the unmixing result. This option includes a statistical analysis of the signal-related (Poisson) noise and weighs the respective contributions when fitting the combination of reference spectra to the experimental data.
Note: This option involves a more sophisticated unmixing algorithm and therefore takes longer than the basic unmixing analysis.
Weighted unmixing generates improved unmixing results when acquisition channels are not well balanced but still have a good signal-to-noise ratio.
Calculate Residuals Activated: Generates an additional channel in which the intensity values represent the difference between the acquired spectral data and the fitted linear combination of the reference spectra. In essence, the residual value is the biggest remaining "residual" from the least-squares fit routine. The residuals are a general measure of how well the fit has performed. The higher the intensity in this additional channel, the worse the fit of the spectra to the data set.
See also
2 Using Direct Processing [} 230]
2 General Settings [} 83]
This method generates the individual color extractions for red, green, and blue from the RGB in-
put image. The resulting images for red, green, and blue take the form of gray images.
Parameters
Parameter Description
Output Pixel type Here you choose the desired output image format, e.g. 8 Bit B/W.
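The color extraction can be sketched as follows. The data layout (rows of RGB tuples) is an assumption for illustration; each channel simply becomes a gray image.

```python
def split_rgb(image):
    """Split an RGB image into three gray images: the red, green, and
    blue color extractions. `image` is a list of rows of (r, g, b)
    tuples. Illustrative sketch only."""
    red = [[px[0] for px in row] for row in image]
    green = [[px[1] for px in row] for row in image]
    blue = [[px[2] for px in row] for row in image]
    return red, green, blue

img = [[(255, 0, 0), (0, 128, 64)]]
r, g, b = split_rgb(img)
print(r, g, b)
# → [[255, 0]] [[0, 128]] [[0, 64]]
```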
This method saves the single blocks/dimensions (Tiles or Positions) of a multiblock image (i.e. an image of an inhomogeneous experiment) in a folder in CZI format.
Parameter
Parameter Description
Split Mode Selects the mode for splitting the multiblock image.
- Homogeneous groups Splits the multiblock image into the single dimensions. The blocks will remain.
Display field The path of the destination folder is displayed automatically in the display field. To change the folder, click on the button to the right of the display field.
This method separates scenes from a tiles or positions image. The individual images are displayed
in the Center Screen Area. Note that the images in this method, in contrast to the method Split
Scenes (Write Files), are not automatically stored in a folder.
This method saves the single scenes (tiles or positions) of a multi-scene image as single images in
a folder in CZI format.
Parameter
Parameter Description
Output Folder Displays and sets the path of the output folder. To change the folder, click on the button to the right of the input field.
Include Scene Information in Generated File Name Activated: Includes the scene information in the file name of the separate image.
Overwrite existing files Activated: All files in the target folder are deleted if the function is applied again.
Compression Selects the type of compression, e.g. Original (no compression) or
Compression (JPEG XR).
This method exports your Airyscan data in a 1ch Sheppard sum format. This export does not change the Airyscan data format, but generates an additional data file with just one summed-up channel of the Airyscan. Since no filtering or deconvolution is performed, this data format is compatible with many third-party or self-programmed deconvolution or machine learning super-resolution methods.
See also
2 Exporting Data in 1ch Sheppard Sum Format [} 721]
2 Airyscan RAW Data [} 721]
Parameter Description
Auto Filter Activated: A suitable Super Resolution parameter for the LSM Plus
processing is automatically determined for the selected data set.
Deactivated: Displays the Super Resolution slider and input field.
– Standard Uses the standard strength for the automatically determined value.
See also
2 Using Direct Processing [} 230]
4 Basic Functionality
The following functionalities are included in the base software:
This module enables you to execute modules from arivis Cloud. arivis Cloud is an online platform for creating customized workflows for image processing tasks of your microscopy images, see https://www.arivis.cloud/home/. The functionality allows the execution of certain demo modules as well as individual arivis Cloud modules.
To be able to connect ZEN with arivis Cloud, you need an access token. Take the following steps
to create the token and enter it in ZEN.
Prerequisite ✓ Your PC has a connection to the internet and you are signed in on arivis Cloud.
✓ You have started ZEN.
1. Click Tools > Options > arivis Cloud.
See also
2 arivis Cloud Tab [} 846]
Info
Ubuntu Distribution Required
If you want to execute your arivis Cloud modules on a remote Linux machine, this remote machine needs to run an Ubuntu distribution.
If you want to execute the arivis Cloud modules on a remote Linux machine, you have to connect
ZEN to this computer.
See also
2 arivis Cloud Tab [} 846]
To execute an arivis Cloud module on your remote computer, you need a folder to which both PCs have access. You then have to tell ZEN which folder it is and what the path to this same folder looks like on the remote computer. Since currently only Linux machines can be used for the remote task, the file systems on the two PCs are different and the folder paths need to be mapped to one another.
Prerequisite ✓ You have set up a folder to which both PCs have access.
✓ You are in Tools > Options > arivis Cloud.
✓ In the Select Execution Mode dropdown, Use Remote Docker Host is selected.
See also
2 Edit Mapping Dialog [} 848]
Prerequisite ✓ You have done all the preliminary work and created an access token, see Creating and Entering an Access Token [} 206].
✓ Your PC is connected to the internet and you have started ZEN.
✓ Docker Desktop is installed and running on your machine, see Preliminary Work & Prerequisites [} 206] and Requirements for Docker Desktop [} 1313].
1. On the Applications tab, open the Module Manager tool.
2. Click Download Modules from arivis Cloud.
See also
2 Module Manager Tool [} 211]
2 arivis Cloud (on-site) [} 206]
Prerequisite ✓ You have downloaded an arivis Cloud module, see Downloading an arivis Cloud Module [} 208].
✓ Docker Desktop is installed and running on your machine, see Preliminary Work & Prerequisites [} 206] and Requirements for Docker Desktop [} 1313].
✓ If you execute the module on a remote computer, both PCs have to be connected via network and have access to the defined shared folder, see Setting Up the Remote Module Execution [} 207].
1. On the Applications tab, open the Module Execution tool.
2. For module, select the module you want to use locally from the dropdown list.
3. If applicable, you can select which Version of the module you want to use.
→ The Module Parameters of the selected module are displayed.
4. Under Module Parameters, set your parameters.
5. If your module supports multiple inputs and you want to use the module on more than one
image, activate the checkbox Use Batch Mode.
6. Under Module Input, click on the corresponding button and select the input image(s) which should be processed by the module.
7. Select the Execution Settings Location where the module is executed on your computer
and your results are saved.
8. Click Execute Module.
→ Your selected arivis Cloud module is executed.
9. To check the status and see the last executed modules, click Browse Results to open the
executions browser.
You have successfully executed an arivis Cloud module.
See also
2 Executions Browser [} 212]
2 Module Manager Tool [} 211]
2 Module Execution Tool [} 212]
This chapter shows you how to download an arivis Cloud module and use it locally. For the pur-
pose of this example, the Auto Thresholding module is downloaded and used locally.
✓ You have created an access token and entered it in Tools > Options > arivis Cloud, see Creating and Entering an Access Token [} 206].
1. On the Applications tab, open the Module Manager tool.
2. Click Download Modules from arivis Cloud.
→ A browser window opens.
→ The parameters of the module Auto Thresholding are now displayed in the tool.
Parameter Description
Local Available Modules Displays a list with all downloaded and locally available arivis Cloud modules. Click a module to display a short description of the respective module and to get the option to delete it.
Download Modules from arivis Cloud Opens a browser window to download arivis Cloud modules, see Downloading an arivis Cloud Module [} 208].
See also
2 arivis Cloud (on-site) [} 206]
Parameter Description
Module Selects the module you want to execute locally. Displays all down-
loaded modules in a dropdown list.
Version Sets which version of the currently selected module should be exe-
cuted.
Module Parameter Displays and sets all the parameters of the currently selected module.
Module Input Selects the image(s) for which the module is executed.
– Use Batch Mode Only available for modules that support multiple input images.
Activated: Enables the selection of multiple input images.
Deactivated: Only one input image can be selected.
– Batch Input Only available for modules that support multiple input images and if Use Batch Mode is activated.
Selects the input for batch execution.
– input_image Selects the input with a click on the corresponding button.
Execution Settings Location Sets the path where the execution is located and where the results are saved.
Stop Batch execution on error Only available for modules that support multiple input images and if Use Batch Mode is activated.
Activated: Stops the batch execution if an error occurs.
See also
2 Executions Browser [} 212]
2 arivis Cloud (on-site) [} 206]
2 Executing arivis Cloud Modules in ZEN [} 209]
This browser displays the last executed arivis Cloud modules with detailed information.
Parameter Description
Filter
– Client or Username contains Filters the executions that contain the characters entered in this field as part of the client and/or username.
– Status Filters the executions by the status selected from this dropdown list.
List of Executions
– Client/Username Displays the name of the client and user that started the execution.
– Open Results Opens a file browser at the location the results are saved.
Module Name Displays the name of the executed module.
Starting Client Displays the name of the client that started the execution.
Starting User Displays the name of the user that started the execution.
Results Directory Displays the directory where the results are saved.
Status History Displays the status history with date and time for each status the exe-
cution had.
Open Results Opens a file browser at the location the results are saved.
4.2 Colocalization
This module enables you to quantify colocalization in two channels. The gray value pixel distribution of the two channels is displayed with the help of a scatter plot with four quadrants. You can draw multiple regions into the image. The data table dynamically shows the measured values for both the entire image and the individual regions. Additionally, 17 measured values in the data table can be exported for further analysis.
Info
The channels that you are comparing with one another are displayed in the image area in the
form of a color overlay. The channel color of the image is used here. If the images have more
than 2 channels, you can add additional channels on the Dimensions tab. This temporary se-
lection is deactivated, however, when you select the channels for comparison on the Coloc.
Tools tab.
In the Colocal. (Colocalization) view, you can analyze the extent of colocalization quantitatively
in two fluorescence channels. The view consists of two main areas: the X/Y scatter plot on the left
and the actual image (2 channels are displayed) in the right image area. Using the Coloc. Tools
tab, you can also display the colocalization table in the lower image area.
The analysis is performed on regions drawn into the image. Once a region is drawn, it is automatically treated as an active region. The scatter plot shows the pixel value frequencies for this region. The Colocalization table displays the data for the entire image and for the selected region. To select several regions, press the Ctrl key while clicking on the desired regions.
Apart from drawing regions into the image, you can also draw them into the X/Y scatter plot. If
you have used the function in the regions section of the Coloc. Tools tab, only those pixels that
are framed by a region in the scatter plot are taken into consideration. This means that you can
correlate interesting point clouds quickly with the corresponding pixels in the image.
The pixel intensities of two channels are plotted against one another in the diagram, and each pixel pair with the same X/Y image coordinates is displayed as a point. The frequency with which pixels of a certain brightness occur is visualized with a color palette that is displayed at the bottom of the diagram. The relative value range is 0 to 255.
The vertical and horizontal axes show the gray value range of the relevant channel.
The diagram is overlaid with two lines that subdivide it into four quadrants, numbered from 1 to
4. Using the mouse, you can position the lines freely and adjust the threshold values to the data.
The quadrants have the following meanings:
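The list of quadrant meanings does not survive in this extract. The following sketch infers them from the coefficient definitions later in this section (quadrant 3 holds pixels above both thresholds, quadrants 1 and 2 pixels above only one threshold, quadrant 4 the remainder); the function and variable names are illustrative, not part of ZEN:

```python
def classify_quadrant(g1, g2, t1, t2):
    """Return the scatter-plot quadrant for one pixel pair.

    Quadrant semantics inferred from the coefficient formulas
    in this section (an assumption, not quoted from the manual):
      1: above threshold in channel 1 only
      2: above threshold in channel 2 only
      3: above threshold in both channels (colocalized)
      4: below threshold in both channels (background)
    """
    if g1 > t1 and g2 > t2:
        return 3
    if g1 > t1:
        return 1
    if g2 > t2:
        return 2
    return 4

def quadrant_counts(ch1, ch2, t1, t2):
    """Count pixels per quadrant for two flat channel value lists."""
    counts = {1: 0, 2: 0, 3: 0, 4: 0}
    for g1, g2 in zip(ch1, ch2):
        counts[classify_quadrant(g1, g2, t1, t2)] += 1
    return counts

# The counts always sum to the total pixel number of the image:
ch1 = [10, 200, 200, 10]
ch2 = [10, 10, 200, 200]
counts = quadrant_counts(ch1, ch2, t1=100, t2=100)
```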
Here you find all control elements you need to perform a colocalization analysis.
Parameter Description
Tool Bar Only visible if Show All is activated.
Displays tools to draw regions for analysis into the image. For a de-
scription of the individual tools, see Graphics Tab [} 1036].
– ROI Only visible if you have drawn a region into the scatter plot.
As long as this button is activated (highlighted in blue), you can select,
move and change the regions in the scatter plot. If you want to
change the quadrant lines again, you need to deselect the button.
Threshold Sets the threshold value (in gray levels) for both channels using the
two Threshold sliders and the two spin boxes/input fields.
Parameter Description
Dimension Selection Only visible if at least one of the Range dropdown lists is set to Auto and if the image is a z-stack or a time series. Defines whether the axis of the scatter plot is based on the values of the current image plane (2D), or on the entire z-stack (All Z)/time series (All T). In this way you can easily determine a valid diagram setting for an entire time series, for example, without having to analyze each individual time point.
Regions
– Channel buttons Here you can mask pixels in the image according to which one of the four quadrants they belong to. The numbers on the buttons correspond to the numbering of the quadrants in the X/Y scatter plot. The color selection window is accessed by clicking on the color field.
Extract
– Scatter Plot Creates a new image document from the X/Y scatter plot. In the case of time series or z-stacks, the dimensions are also created automatically.
Only visible if the Table checkbox is activated on the Coloc. Tools tab.
For each quadrant of the scatter plot there is a corresponding row in the table. The table contains columns for the different measured values, with the Global row containing the values for the entire image.
4.2.1.3.1 Region
Once a region has been selected, it has a number assigned to it. This number appears in the image and in the table.
4.2.1.3.2 Quadrant
Indicates the measured values for the four quadrants of the scatter plot.
Shows the total number of pixels of each quadrant. The sum of this column over all 4 quadrants corresponds to the product of height × width of the original image.
Insensitive to differences in the signal intensity between the two channels and bleaching.
Value range: 0 to 1
The calculation formula is as follows:
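The formula itself does not survive in this extract. Given the stated properties (value range 0 to 1, insensitivity to intensity differences between the channels and to bleaching), this is most likely the overlap coefficient after Manders; the following is a hedged reconstruction, not taken from the manual, with $S_{1,i}$ and $S_{2,i}$ denoting the gray values of pixel $i$ in channels 1 and 2:

```latex
R = \frac{\sum_i S_{1,i} \cdot S_{2,i}}
         {\sqrt{\sum_i S_{1,i}^{2} \cdot \sum_i S_{2,i}^{2}}}
```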
This coefficient indicates the relative number of colocalized pixels in channel 1 in relation to the
total number of pixels above the threshold value:
The values range between 0 and 1, with 0 indicating no colocalization and 1 indicating full colo-
calization.
Numerator = Number of pixels in quadrant 3
Denominator = Number of pixels in quadrant 3 + number of pixels in quadrant 1
This coefficient indicates the relative number of colocalized pixels in channel 2 in relation to the
total number of pixels above the threshold value:
The values range between 0 and 1, with 0 indicating no colocalization and 1 indicating full colo-
calization.
Numerator = Number of pixels in quadrant 3
Denominator = Number of pixels in quadrant 3 + number of pixels in quadrant 2
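The two definitions above can be written compactly, with $N_q$ denoting the number of pixels in quadrant $q$:

```latex
C_1 = \frac{N_3}{N_3 + N_1}, \qquad C_2 = \frac{N_3}{N_3 + N_2}
```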
4.2.1.3.10 CC (Weighted) 1
Weighted correlation coefficient channel 1. Calculated like the simple colocalization coefficient,
but using the sum of the gray value intensity rather than the number of pixels.
The values range between 0 and 1, with 0 indicating no colocalization and 1 indicating full colo-
calization.
Numerator = Sum of intensity of all pixels in quadrant 3
Denominator = Sum of intensity of all pixels above the threshold value
4.2.1.3.11 CC (Weighted) 2
Weighted correlation coefficient channel 2. Calculated like the simple colocalization coefficient,
but using the sum of the gray value intensity rather than the number of pixels.
The values range between 0 and 1, with 0 indicating no colocalization and 1 indicating full colo-
calization.
Numerator = Sum of intensity of all pixels in quadrant 3
Denominator = Sum of intensity of all pixels above the threshold value
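The weighted coefficients described above can be sketched as follows; `pixels`, `t1`, `t2` and the function name are illustrative, not part of the ZEN API:

```python
def weighted_coefficients(pixels, t1, t2):
    """Weighted colocalization coefficients as described above:
    like the simple coefficients, but summing gray values instead
    of counting pixels.  `pixels` is a list of (g1, g2) pairs;
    t1/t2 are the channel thresholds.
    """
    q3_sum1 = q3_sum2 = 0   # channel intensity sums in quadrant 3
    above1 = above2 = 0     # channel intensity sums above threshold
    for g1, g2 in pixels:
        in1, in2 = g1 > t1, g2 > t2
        if in1:
            above1 += g1
        if in2:
            above2 += g2
        if in1 and in2:     # quadrant 3: above both thresholds
            q3_sum1 += g1
            q3_sum2 += g2
    cc1 = q3_sum1 / above1 if above1 else 0.0
    cc2 = q3_sum2 / above2 if above2 else 0.0
    return cc1, cc2

# Half of each channel's above-threshold intensity is colocalized:
cc1, cc2 = weighted_coefficients(
    [(150, 150), (150, 10), (10, 150), (10, 10)], t1=100, t2=100)
```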
The sum of all gray values from channel 1, divided by the total number of pixels in this channel:
The sum of all gray values from channel 2, divided by the total number of pixels in this channel:
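Both definitions above correspond to the arithmetic mean of the channel, with $g_{c,i}$ the gray value of pixel $i$ in channel $c$ and $N$ the total number of pixels:

```latex
\bar{g}_c = \frac{1}{N}\sum_{i=1}^{N} g_{c,i}, \qquad c \in \{1, 2\}
```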
4.2.1.3.16 Z Index
Displays the relative focus position at which an image has been acquired.
4.2.1.3.17 T Index
Displays the time of acquisition for all dimensions of a multidimensional image, beginning at 0h:00min:00sec:00msec.
The development of microscopic techniques makes data structures ever more complicated. In many cases, the raw data is large, time-consuming to process and not needed at the end of the experiment. The Direct Processing module is designed to simplify the workflow to improve usability, and to parallelize the acquisition and processing steps to save time.
During image acquisition using an acquisition PC, Direct Processing enables the user to select
processing functions which are executed on a processing PC. Several different functions are avail-
able and you can define a sequence of functions (in a so-called pipeline), which are then executed
sequentially one after another. Direct Processing starts to process the smallest processable en-
tity as soon as its acquisition has been completed. In the case of deconvolution, this is typically a
z-stack for one channel.
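The pipeline idea described above, a user-defined sequence of functions applied in order to each smallest processable entity as soon as it is acquired, can be sketched as a simple loop. This is illustrative only; in ZEN the pipeline is configured in the Direct Processing tool, not in code, and the toy functions below stand in for real processing steps:

```python
def run_pipeline(entity, pipeline):
    """Apply a sequence of processing functions to one entity
    (e.g. the z-stack of a single channel), in order.  Each
    function's output becomes the next function's input.
    """
    result = entity
    for function in pipeline:
        result = function(result)
    return result

# Toy stand-ins for real steps such as deconvolution or denoising:
double = lambda stack: [2 * v for v in stack]
clip_8bit = lambda stack: [min(v, 255) for v in stack]

out = run_pipeline([10, 200], [double, clip_8bit])
```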
Graphics cards
All ZEISS workstations released for use with Deconvolution are equipped with a sufficiently large GPU. Deconvolution uses GPU acceleration, so a dedicated NVIDIA CUDA-based graphics card is optimal and recommended. ZEN has been tested with professional-grade NVIDIA Quadro GPUs (P6000, P4000, M6000, M4000 and RTX6000). The use of NVIDIA GeForce-grade GPUs should also be possible, but is not extensively tested. The other processing methods in Direct Processing, such as Airyscan/Apotome processing and denoising, use the CPU and not the GPU.
The drivers need to be compatible with Windows 10.
If processing and acquisition run on different computers and a second acquisition computer is
added to the network, then the ZEN version of the computers can be different. In this case, the
“older” ZEN version will not be able to use the added functionalities of the more recent ZEN ver-
sion on the processing machine. There is no “backward” compatibility implemented.
See also
2 Connecting Acquisition Computer and Processing Computer [} 222]
§ Before using the Direct Processing functionality on two computers, the acquisition and the
processing computers need to be connected, see Connecting Acquisition Computer and
Processing Computer [} 222].
§ On the acquisition computer, settings need to be defined, see Direct Processing Tool on Ac-
quisition Tab [} 233].
§ On the acquisition computer, settings in the Auto Save tool need to be defined, see Defining
Settings in the Auto Save Tool [} 229].
§ On the processing computer, the receiving needs to be activated, see Direct Processing Tool
on Applications Tab [} 237].
§ Set up and run an experiment with Direct Processing, see Using Direct Processing [} 230].
See also
2 Using Deconvolution in Direct Processing [} 604]
2 Using Direct Processing with Airyscan Processing [} 231]
Before you can use Direct Processing on two or more computers, you need to connect the com-
puters. For general information, see Direct Processing [} 219]. For this the network configuration
of your PCs is a very important aspect, as depending on the configuration, your PC might have
more than one (ethernet) network connection.
For the communication via network with or without a discovery proxy, see Connecting Comput-
ers Without Discovery Proxy [} 223] or Connecting Computers With Discovery Proxy [} 226] re-
spectively.
On the Processing PC
3. If you have several network connections, under IP Address (this PC), select the IP address you want to use from the dropdown. For example, if you want to use a direct connection with a 10 Gbit cable, you can select the corresponding IP here. Make sure to enable the same connection on the acquisition PC.
4. Note down the name and/or the IP address and port of the processing PC. You can find the IP and port under IP Address (this PC) and the computer name if you activate Send Hardware Information. Alternatively, ask your IT department for information on how to find the IP address or name of the computer. Note: In some networks the computers might be assigned a new IP address over time (e.g. each day), so using the computer name for establishing communication is advisable.
5. If you want to display information about the processing computer on your acquisition com-
puter, enter the information in the text box PC Description.
6. If you want to display the hardware information, activate Send Hardware Information, and for displaying statistics about the average job time, activate Send Processing Statistics.
On the Acquisition PC
3. If you have several network connections and want to force the use of a specific one (e.g. a direct connection with a 10 Gbit cable), under IP Address (this PC), activate Select IP Address and select the respective address from the dropdown.
4. Click OK to close the dialog.
à The Tools > Options dialog closes.
8. In the Custom Processing PC text fields, enter the name or the IP address of the process-
ing computer in the Host Name field and the Port of the processing computer.
9. Click Add.
à The processing computer is now added to the list.
10. Select your processing computer in the list and click Connect. Alternatively, activate Auto-
matically Select Processing PC and the processing PC is automatically selected for each
experiment based on the available PCs and their queue length (the PC with the shortest
queue is selected).
à You are now connected to the processing computer.
11. Click Close to exit the dialog.
Both computers are now connected and Direct Processing is set up. You can now configure and
run your experiment.
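The automatic selection described in step 10 (the PC with the shortest queue wins) amounts to a minimum search. A minimal sketch, under the assumption that each candidate PC reports its queue length; the names here are illustrative, not the ZEN API:

```python
def select_processing_pc(available_pcs):
    """Pick the processing PC with the shortest job queue.

    `available_pcs` maps a PC name to the number of jobs
    currently in its queue (an illustrative data shape).
    """
    if not available_pcs:
        raise ValueError("no processing PC available")
    return min(available_pcs, key=available_pcs.get)

# 'ws-2' has the shortest queue and is therefore selected:
chosen = select_processing_pc({"ws-1": 4, "ws-2": 1, "ws-3": 3})
```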
See also
2 Overview Direct Processing [} 222]
2 Using Deconvolution in Direct Processing [} 604]
2 Using Direct Processing with Airyscan Processing [} 231]
2 Connecting Acquisition Computer and Processing Computer [} 222]
On the Processing PC
3. Under IP Address (Discovery Proxy), enter the IP address and Port of the discovery
proxy.
4. If you have several network connections, under IP Address (this PC), select the IP address you want to use from the dropdown. For example, if you want to use a direct connection with a 10 Gbit cable, you can select the corresponding IP here. Make sure to enable the same connection on the acquisition PC.
5. If you want to display information about the processing computer on your acquisition com-
puter, enter the information in the text box PC Description.
6. If you want to display the hardware information, activate Send Hardware Information, and for displaying statistics about the average job time, activate Send Processing Statistics.
7. Click OK to close the dialog and save the settings.
à The Tools > Options dialog closes.
8. On the Applications tab, in the Direct Processing tool, click Start Receiving. Note that
this is usually already activated by default.
On the Acquisition PC
3. Under IP Address (Discovery Proxy), enter the address of the Discovery Proxy. Alterna-
tively, this can also be done or changed in the Connected Processing PCs dialog later.
4. If you have several network connections and want to force the use of a specific one (e.g. a direct connection with a 10 Gbit cable), under IP Address (this PC), activate Select IP Address and select the respective address from the dropdown.
5. Click OK to close the dialog.
à The Tools > Options dialog closes.
9. Select your processing computer in the list and click Connect. Alternatively, activate Auto-
matically Select Processing PC and the processing PC is automatically selected for each
experiment based on the available PCs and their queue length (the PC with the shortest
queue is selected).
à You are now connected to the processing computer.
10. Click Close to exit the dialog.
Both computers are now connected and Direct Processing is set up. You can now configure and
run your experiment.
See also
2 Overview Direct Processing [} 222]
2 Using Deconvolution in Direct Processing [} 604]
2 Using Direct Processing with Airyscan Processing [} 231]
If you want to set up and use your computer as discovery proxy, take the following steps:
3. If your PC has several active network connections, under IP Address (this PC), select the IP
address you want to use for the discovery proxy setup in the dropdown.
4. Click Start.
5. Click OK to close the Tools > Options dialog.
This computer is now set up and used as discovery proxy with the displayed IP address.
Info
Auto Save and ZEN Connect
If you have opened a ZEN Connect Project, the folder in the Auto Save tool is automatically
set to the folder where the ZEN Connect project is saved. In this case you cannot change the
folder for Auto Save. Also note that with an active ZEN Connect project you cannot select the
upload to ZEN Data Storage.
Info
Using Direct Processing and ZEN Data Storage
If you are using the ZEN Data Storage server as the location for storing your data, all acquisi-
tion and processing computers involved must have access to the transfer share defined in ZEN
Data Storage.
Also note that the images are only uploaded to ZEN Data Storage after they are no longer open in ZEN.
When using the Direct Processing functionality, perform the following steps on the acquisition
computer to define the settings in the Auto Save tool.
Prerequisite ü You have the Auto Save tool open on the Acquisition tab.
1. In the dropdown, select if you want to save the image(s) locally or upload them to ZEN
Data Storage.
2. In the Folder field, specify the local directory, where acquired images should be stored.
Make sure that it is a shared folder which can be accessed from both computers.
3. If you select Store in ZEN Data Storage Server, make sure that the processing computer
is also connected to the same server. Optionally, you can define if the image should directly
be shared with a particular collection.
4. If you want images to automatically be stored in a new subfolder named with the current
date, activate Automatic Sub-Folder.
5. In the Name field, specify the file base name for the acquired images.
6. Activate Close CZI Image After Acquisition if you want to release the image from the acquisition computer's memory.
See also
2 Uploading Images Automatically After Acquisition [} 272]
Prerequisite ü If you are using Direct Processing on different computers, you have connected acquisition and
processing computer, see Connecting Acquisition Computer and Processing Computer
[} 222].
ü To ensure that the processing computer reads incoming files and starts the processing, on the
Applications tab, in the Direct Processing tool, you have clicked Start Receiving. This is
usually active by default.
ü On the Acquisition tab, you have set up your experiment for image acquisition.
ü On the Acquisition tab, Direct Processing is activated. This activates the Auto Save tool as
well.
ü Depending on your settings, you have defined the folder where the acquired images are
stored in the Direct Processing or the Auto Save tool. Use a folder to which the processing
computer has access. For information about sharing a folder, see Sharing a Folder for Direct
Processing [} 238].
1. On the Acquisition tab, open the Direct Processing tool.
à If no Direct Processing settings were made before for the current experiment, a particular
processing function is already preselected depending on your microscope, channel set-
tings and licenses.
2. From the Processing Function drop-down list, select the processing function you want to
use.
à The parameters of the function are displayed and the name of the function is displayed in
the pipeline container.
3. Set all the parameters of the function for your experiment. For detailed information about
the parameters refer to the descriptions of the individual image processing function.
4. To add another function or several other ones, click Add Function.
à A new container is added in the pipeline.
5. Select the next pipeline container, select a processing function from the dropdown, and set
the parameters for each function.
à You have added and set up a sequence of processing functions.
6. Click Start Experiment to run the experiment. Note: You can pause the processing. If you
stop the experiment, requests that have been sent earlier by the acquisition computer are
not processed. However, already processed images will be retained.
à The images are stored in the folder you have defined in the Auto Save or Direct Pro-
cessing tool. When you abort the acquisition, the remote processing will not take place.
In case you have set up several processing functions, only the acquired image and the fi-
nal output image are stored.
à The processing computer reads incoming files and starts the processing. The path to the
selected folder, the currently processed image as well as the images to be processed are
displayed in the Direct Processing tool. The processed image is saved to the same
folder specified in the Direct Processing tool. If the image name already exists in this
folder, the new file is saved under a new name <oldName>-02.czi.
7. To cancel the processing on the processing computer, on the Applications tab, in the Di-
rect Processing tool, click Cancel Processing.
Once processing is finished, you are notified on the acquisition PC and can open and view the ac-
quired image as well as the processed image. This should be done on the processing computer, so
that you can immediately start a new experiment on the acquisition computer. However, you can
also automatically open the processed image on the acquisition PC with the respective setting in
the Direct Processing tool on the Acquisition tab.
Information about Direct Processing (e.g. the duration) is available on the Info view tab of the
processed image.
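The collision-avoidance naming mentioned in the steps above (an existing <oldName>.czi leads to <oldName>-02.czi) can be sketched as follows. How ZEN continues beyond -02 is an assumption here (incrementing the counter), and the function name is illustrative:

```python
import os

def unique_save_name(folder, base_name, existing=None):
    """Return a file name that does not collide in `folder`.

    Mirrors the behavior described above: if base_name.czi exists,
    fall back to base_name-02.czi.  Continuing with -03, -04, ...
    is an assumption, not confirmed by the manual.  `existing`
    lets callers inject a file list instead of reading the folder.
    """
    files = set(existing) if existing is not None else set(os.listdir(folder))
    candidate = f"{base_name}.czi"
    counter = 2
    while candidate in files:
        candidate = f"{base_name}-{counter:02d}.czi"
        counter += 1
    return candidate
```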
See also
2 Using Direct Processing with Airyscan Processing [} 231]
2 Using Deconvolution in Direct Processing [} 604]
2 Direct Processing Tool on Acquisition Tab [} 233]
2 Auto Save Tool [} 890]
Note: If you uncheck the Auto Filter checkbox and activate Adjust per Channel, you can
set the Super Resolution parameter in a channel specific manner.
3. Select the desired settings for Airyscan Processing. For details how to use this function,
see Airyscan Processing [} 186]. Ideally, you have already checked the best parameters be-
forehand, using a sample image acquired under the same conditions as set up for the ex-
periment.
4. Click Start Experiment to run the experiment.
Note: You can pause the processing. If you stop the experiment, requests that have been
sent earlier by the acquisition computer are not processed. However, already processed im-
ages will be retained.
à The images are stored in the folder you have defined in the Auto Save or Direct Pro-
cessing tool. When you abort the acquisition, the remote Airyscan processing will not
take place.
à The processing computer reads incoming files and starts the Airyscan processing. The
path to the selected folder, the currently processed image as well as the images to be
processed are displayed in the Direct Processing tool. The processed image is saved to
the same folder specified in the Direct Processing tool. If the image name already exists
in this folder, the new file is saved under a new name <oldName>-02.czi.
5. To cancel the processing, click Cancel Processing.
Once processing is finished, you are notified on the acquisition PC and can open and view the acquired image as well as the processed images. This should be done on the processing computer so that you can immediately start a new experiment on the acquisition computer. However, you can also automatically open the processed image on the acquisition PC with the respective setting on the Direct Processing Tool on Acquisition Tab [} 233].
When you open the image in the Image View, information about the executed Airyscan process-
ing is available on the Info view tab. Additionally, general information about Direct Processing
(e.g. the duration) is also available on the Info view tab of the processed image.
See also
2 Auto Save Tool [} 890]
Parameter Description
Add Function Adds another processing function to the pipeline.
Remove Function Removes the currently selected processing function from the pipeline.
Processing Function Selects the processing function you want to use for Direct Processing for the currently selected container of the pipeline. Depending on your experiment, microscope, channel settings and licenses, a particular processing function is already preselected by default.
Parameter Section In this section you have different parameters, depending on the selected Processing Function. For more information, see:
§ Airyscan Joint Deconvolution Parameters [} 93]
§ Airyscan Processing Parameters [} 235]
§ LSM Plus Processing Parameters [} 205]
§ ApoTome RAW Convert Parameters [} 188]
§ Deblurring Parameters [} 94]
§ Deconvolution Parameters [} 236]
§ Denoise Parameters [} 112]
§ Extended Depth of Focus Parameters [} 175]
§ Intellesis Denoising Parameters [} 517]
§ Linear Unmixing Parameters [} 202]
§ Stitching Parameters [} 148]
§ Unsharp Mask Parameters [} 177]
§ Lattice Lightsheet Parameters [} 1198]
§ SIM Processing Parameters [} 1305]
§ Apotome Plus Parameters [} 608]
Use Advanced Settings Only visible if you have selected Deconvolution or Apotome Plus.
Activated: Enables the dropdown menu to load an advanced setting.
Load Setting created in the Deconvolution function Only available if Use Advanced Settings is activated and Deconvolution is selected.
Selects advanced settings which were created in the image processing function Deconvolution (adjustable).
Note: Currently Direct Processing only supports settings configured in the Deconvolution tab of the function; some settings on the PSF tab, such as spherical aberration correction, cannot be used in Direct Processing.
Load Setting created in the Apotome Plus function Only available if Use Advanced Settings is activated and Apotome Plus is selected.
Selects advanced settings which were created in the image processing function Apotome Plus (adjustable).
Note: Currently Direct Processing only supports settings configured in the Apotome Plus tab of the function; some settings on the PSF tab cannot be used in Direct Processing.
- Output Folder Displays the path where the processed image is saved. Make sure that a shared folder is selected to which both the acquisition and the processing computers have access.
- Use Output Folder from Auto-Save Activated: Synchronizes the output folder for Direct Processing with the folder in the Auto Save tool. The Output Folder can then only be changed by editing the Folder in the Auto Save tool. If you have opened a ZEN Connect project, the folder in the Auto Save tool is automatically set to the folder where the ZEN Connect project is saved.
- Create Sub-folders Automatically Activated: Creates sub-folders in the output folder. All processed images end up in a new folder named with the current date.
- Define File Naming Defines the filename of the processed image. Default: processed.czi.
- Use Original Name with Suffix Activated: Uses the name of the file as it was acquired and adds the defined file naming as a suffix.
- File Name Preview Displays a preview of the name of the processed acquisition image.
- Remove Acquired Image after Processing Activated: Keeps only the processed image and deletes the acquisition image after processing.
Deactivated: Keeps both the acquired and the processed image.
- Only Open Final Image The processed image is displayed after processing is completed.
Parameter Description
- This PC Selects the option to process the data on this PC, i.e. the PC used for
the acquisition.
- Edit Connection Opens the dialog which lists all the connected PCs that are ready to receive remote processing requests.
In case of file-based communication, it checks if there is a processing PC listening on the communication path. A message informs you about the result.
See also
2 Connecting Acquisition Computer and Processing Computer [} 222]
2 Using Direct Processing [} 230]
2 Using Deconvolution in Direct Processing [} 604]
2 Using Apotome Plus in Direct Processing [} 617]
2 Using Direct Processing with Airyscan Processing [} 231]
This set of parameters is only visible if Airyscan Processing is selected as Processing Function. For more information, see also Airyscan Processing [} 186].
Parameter Description
3D Processing Only available for images with 5 or more z-positions.
If activated, this option improves the resolution in the axial and lateral directions. The data set needs to have at least 5 z-sections acquired with an optimal step size. 3D Processing is slower than 2D Processing. For 3D Processing, the whole z-stack (single channel and time point) needs to fit into the physical memory.
Auto Filter If activated, a suitable Super Resolution parameter for the Airyscan
processing is automatically determined for the selected data set. To
manually adjust the Super Resolution parameter, deactivate the
checkbox.
Strength Use this option for an increased (high) or decreased (low) strength of the automatically assigned filter value. This is especially useful for 3D processing, as the 2D preview of the processing filter value in the Airyscan viewer does not allow conclusions about the result of a 3D data processing.
The increment of this parameter is ± 0.4 compared to the standard auto Airyscan processing. This setting is not available when manual processing strength is selected.
Adjust per Channel Only visible if Auto Filter is deactivated and only for images with two or more Airyscan channels.
If activated, you can manually set channel-specific Airyscan processing parameters.
See also
2 Using Direct Processing [} 230]
2 Using Direct Processing with Airyscan Processing [} 231]
Parameter Description
- Simple, very fast (Nearest Neighbor)
- Better, fast (Regularized Inverse Filter)
- Good, medium speed (Fast Iterative)
- Excellent, slow (Constraint Iterative)
Only available if Use advanced settings is deactivated. The Simple, very fast method is only available if you have configured an experiment with a z-stack acquisition.
In Direct Processing, the parameters for Deconvolution are based on Deconvolution (adjustable). The Normalization is set to Clip (with a Factor of 1) in case of remote processing, because only with Clip do the output images contain brightness values which allow quantitative comparisons. For Direct Processing, you cannot change any other values.
We recommend Excellent, slow (Constraint Iterative). Note that this method is only available if you have the license for the Deconvolution module.
See also
2 Creating Deconvolution Settings [} 601]
2 Using Deconvolution in Direct Processing [} 604]
2 Deconvolution (Defaults) [} 104]
2 Deconvolution (adjustable) [} 95]
Parameter Description
Use Discovery Proxy Activated: Uses a Discovery Proxy server for the communication between the computers. This control is synchronized with the respective options in the Tools > Options > Direct Processing dialog.
– Host Name Only available if Use Discovery Proxy is activated. Displays and edits the name/IP address of the discovery proxy.
Available Processing PCs Displays a list with all the available processing PCs. It provides the name and an overview of how many jobs are currently in the Queue. Your current PC is automatically listed if you have clicked Start Receiving in the Direct Processing tool on the Applications tab.
– Delete Only available for PCs that are added as Custom Processing PC. Deletes the custom PC from the list.
– Host Name Sets the name/IP address of the respective processing PC.
Info
Receiving by Default
By default, the computer starts receiving processing requests automatically on application startup, i.e. Start Receiving is already active. If necessary, this behavior can be changed under Tools > Options > Direct Processing > Setup Processing PC.
Parameter Description
Start/Stop Receiv- Starts or stops the reception of processing requests. The processing
ing computer waits for processing requests from the acquisition com-
puter. The computer is receiving by default.
Parameter Description
Note: If you click Stop Receiving while an experiment is running, the
experiment is continued and processed. Only after finishing the cur-
rently running experiment, a new experiment is processed.
Listening on Communication Path Shows the path where the computer is listening for processing requests.
Current Request Displays which image is currently being processed. A progress bar indicates how close you are to completing the currently processed image.
Items in the Queue Displays the number of images to be processed. Note that due to the integrative nature of the CZI images, individual scenes will not show up as individual steps in the queue. Only when separate CZI documents are being produced will the queue show a count > 0.
Cancel Processing Cancels the processing of the images in the output folder.
See also
2 Direct Processing [} 219]
This step by step guide gives instructions on how to create a shared folder for the Direct Process-
ing functionality in ZEN.
You need an imaging system with its workstation (called the acquisition computer) and, for processing, a high-end workstation (called the processing computer) with a P6000 card (M/P4000 or M6000 works too) running the newest NVIDIA driver. Both computers need software version ZEN 3.0 or higher.
Computer connection
The processing computer must be connected to the acquisition computer via an Ethernet connection. It works best to have both workstations equipped with a 10 Gbit Ethernet card. A 1 Gbit Ethernet connection is fine as well; just make sure not to create very large images, and deactivate Follow Acquisition in ZEN when acquiring larger images. For details on how to connect your computer in a network, ask your local IT department. For some basic information, see also Connecting Computers via Cable [} 244].
IP Addresses
This is not strictly necessary, but knowing the IP addresses is the safest way to troubleshoot networking issues, so we recommend checking them in any case. It is easiest to network the computers if both have the same login credentials, e.g. Username = ZEISS, password = zeiss. The computers can only network with each other if a password is set for both logins. For a description of how to look up the IP address of your PC, see Looking Up the IP Address of Your Computer [} 243].
Also note down the computer names. You can find them when you right-click This PC in the Explorer and select Properties.
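As an alternative to clicking through Windows dialogs, you can also look up the computer name and a best-effort IP address programmatically. A small Python sketch (illustrative only; on some network setups the hostname resolves to a loopback address, in which case check the adapter settings instead):

```python
import platform
import socket

# Look up this computer's name and a best-effort IP address.
# Note: depending on the network setup, the hostname may resolve to a
# loopback address; in that case check the adapter settings instead.
hostname = platform.node() or socket.gethostname()
try:
    ip_address = socket.gethostbyname(socket.gethostname())
except socket.gaierror:
    ip_address = "could not be resolved"
print(f"Computer name: {hostname}")
print(f"IP address:    {ip_address}")
```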
You have to make your processing PC discoverable in your network and create a shared folder.
13. If you want to change which accounts you can add, click Object Types to open a dialog to
select different types.
14. If you want to change the (network) location, click Locations to open a dialog to select the
location.
15. In the text field, enter the user or group name you want to give access to and click Check
Names.
à If the name exists, it is verified. If there is no exact match, or if there are multiple matches, a dialog opens where you can refine your search or select the right user/group.
16. Click OK.
à The Select Users, Computers, Service Accounts, or Groups dialog closes and the
user/group is added to the list on the Permissions dialog.
17. Select the user(s)/group(s) and activate the checkboxes for the permissions you want to give
them to this folder.
18. Click OK to confirm all changes in all dialogs.
You have created a shared folder on the processing computer. You can now set up access to the folder on the acquisition computer.
On the acquisition computer, you need to map the folder from the processing computer to a drive
letter.
Note: You could also do this by directly accessing the folder via the IP address, however, it is bet-
ter to use the Windows mapping function as this enables some caching mechanisms which make
the connection more reliable.
7. Click OK.
8. Click Finish.
à The folder is mapped to your acquisition PC.
You are now ready to set up the connection of the two computers in ZEN. Refer to the respective chapter in the software manual or to the help in the software, which is accessible by pressing the F1 key.
We recommend connecting the PCs in a network by using, e.g., a router. If a router cannot be used and the two PCs are connected directly via a cable, consider the following remarks:
Extended Focus Processing enables you to generate images with no limitation of depth of field. It supports the extraction of the sharp details from individual images at various focus positions and their combination into an image with high depth of field. You can process z-stacks that have already been acquired; a wavelet algorithm allows use in transmitted light, reflected light, and fluorescence imaging.
The functionality Manual Extended Focus allows you to create one single image out of several image acquisitions with different focus positions. The sharp areas of all acquisitions are combined into one consistently sharp image, the so-called EDF image (EDF = Extended Depth of Focus). To work with the Manual Extended Depth of Focus functionality, it has to be activated in Tools > Modules Manager > Manual Extended Focus. The tool Manual Extended Depth of Focus is displayed on the Locate tab.
See also
2 Comparing Images Using Split Display [} 246]
In this mode, a new image is automatically added after a defined interval. In the time period be-
tween the acquisitions you can set a different focus position. After each recording a new image
with extended depth of focus is calculated immediately. After finishing the acquisition, the final
image is the one that was calculated last.
Prerequisite ü You have opened the Manual Extended Depth of Focus tool on the Locate tab.
1. If you want to store all acquired images in memory for a complete recalculation of the EDF
image after each single acquisition, activate Z-Stack. Note that the more images you ac-
quire, the more time it takes to calculate the EDF.
2. Click Timer.
3. Adjust the length of your interval with the Interval slider or input field. Set the interval long
enough so that you can comfortably move the specimen to a new focus position. After the
time elapses, an image is automatically acquired at your current position.
4. Click Start to acquire the images.
à The Central Screen Area is split into two parts. On the left you can see the current live
image, on the right you see the currently calculated extended focus image.
5. Move to a new focus position after each acquisition.
6. Repeat these steps until you have sharp images from all desired areas of your sample.
7. Click Stop to finish your acquisition. If you want to pause the acquisition, click Pause.
You have finished the acquisition of the manual EDF image. The resulting image is displayed in the
Central Screen Area.
See also
2 Acquiring EDF Images with F12 Key [} 245]
2 Comparing Images Using Split Display [} 246]
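The effect of the Z-Stack option in step 1 can be illustrated with a small sketch: in the additive mode, only the previous EDF result and a per-pixel sharpness map are kept, and each new frame contributes its locally sharper pixels. The following Python sketch is a simplified illustration of such a fusion scheme (using a gradient-based sharpness measure as an assumption); ZEN's actual algorithm is wavelet-based:

```python
import numpy as np

def sharpness(img: np.ndarray) -> np.ndarray:
    # Per-pixel sharpness proxy: squared gradient magnitude.
    gy, gx = np.gradient(img.astype(float))
    return gx * gx + gy * gy

def fuse_additive(edf, edf_sharp, new_frame):
    """Additive update: keep each pixel from whichever source is locally sharper."""
    s = sharpness(new_frame)
    fused = np.where(s > edf_sharp, new_frame, edf)
    return fused, np.maximum(s, edf_sharp)

# Two frames, each carrying detail in a different half of the image
ramp = np.array([0.0, 3.0, 6.0, 9.0])
a = np.zeros((8, 8)); a[:, :4] = ramp  # detail in the left half
b = np.zeros((8, 8)); b[:, 4:] = ramp  # detail in the right half
edf, edf_sharp = a, sharpness(a)
edf, edf_sharp = fuse_additive(edf, edf_sharp, b)
```

Only the running result and its sharpness map are stored, which is why this mode cannot recalculate the entire stack afterwards.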
In this mode you can manually record a new image by pressing the F12 key on your keyboard.
With this you can achieve different time intervals between the individual acquisitions when you
change the focus position.
Prerequisite ü You have opened the Manual Extended Depth of Focus tool on the Locate tab.
1. If you want to store all acquired images in memory for a complete recalculation of the EDF
image after each single acquisition, activate Z-Stack. Note that the more images you ac-
quire, the more time it takes to calculate the EDF.
2. Click F12 Key.
3. Click Start to acquire the images.
à The Central Screen Area is split into two parts. On the left you can see the current live
image.
4. Press F12 on your keyboard to acquire an image manually.
à The calculated extended focus image on the right side is updated.
5. Move to a new focus position.
6. Repeat the last two steps until you have sharp images from all desired areas of your sam-
ple.
7. Click Stop to finish your acquisition. If you want to pause the acquisition, click Pause.
You have finished the acquisition of the manual EDF image. The resulting image is displayed in the
Central Screen Area.
See also
2 Acquiring EDF Images with Timer [} 245]
2 Comparing Images Using Split Display [} 246]
Prerequisite ü You have already acquired images with an extended depth of focus which you want to com-
pare to each other.
The comparison of all four methods (Timer, F12-key, Z-stack without shift, Z-stack with
shift) shows identical resulting images.
See also
2 Acquiring EDF Images with Timer [} 245]
2 Acquiring EDF Images with F12 Key [} 245]
Parameter Description
Z-Stack Activated: Stores all acquired images in memory and calculates the
EDF image based on all images after each individual acquisition. Note
that the more images you acquire, the more time it takes to calculate
the EDF.
Deactivated: Calculates the EDF image based on the last acquired image and the previously calculated EDF image. This mode is purely additive: it neither stores all acquired individual images nor recalculates the entire image stack.
Quality Selects the quality level that you want the function to work with.
Registration Method  Selects the method (or combination of methods) that is used to align the images.
- Translation  The neighboring sections of the z-stack image are shifted in relation to each other in x and y direction.
- Rotation  The neighboring sections of the z-stack image are rotated in relation to each other.
- Skew Scaling  The neighboring sections of the z-stack image are corrected for skewness/shearing.
- Affine  The neighboring sections of the z-stack image are shifted in x and y direction, rotated, and the magnification is adjusted from section to section.
- Nearest Neighbor  The output pixel is given the gray value of the input pixel that is closest to it.
- Linear  The output pixel is given the gray value resulting from the linear combination of the input pixels closest to it.
- Cubic  The output pixel is given the gray value resulting from a polynomial function of the input pixels closest to it.
Mode
- F12 Key  Acquires an EDF image when you press the F12 key.
See also
2 Acquiring EDF Images with Timer [} 245]
2 Acquiring EDF Images with F12 Key [} 245]
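The interpolation options above (Nearest Neighbor, Linear, Cubic) differ only in how an output pixel samples its input neighbors. A one-dimensional Python sketch of the first two variants (illustrative only; the cubic variant additionally fits a polynomial through the four nearest samples):

```python
import numpy as np

def resample_1d(values: np.ndarray, x: float, mode: str = "nearest") -> float:
    """Sample a 1D signal at a fractional position x."""
    if mode == "nearest":
        # Take the gray value of the closest input sample.
        i = int(np.clip(round(x), 0, len(values) - 1))
        return float(values[i])
    if mode == "linear":
        # Weighted combination of the two closest input samples.
        i0 = int(np.clip(np.floor(x), 0, len(values) - 2))
        t = x - i0
        return float((1 - t) * values[i0] + t * values[i0 + 1])
    raise ValueError(mode)

signal = np.array([0.0, 10.0, 20.0, 30.0])
nearest = resample_1d(signal, 1.4, "nearest")  # closest sample is index 1
linear = resample_1d(signal, 1.4, "linear")    # between samples 1 and 2
```

Nearest neighbor preserves the original gray values exactly but produces blocky results; linear (and cubic) interpolation produces smoother transitions at the cost of introducing intermediate values.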
4.6 Measurement
This module enables you to measure morphological parameters of interactively defined contours, like area, orientation angle, perimeter, diameter, center of gravity, radius of the circle with equal area, shape factor, bounding box, projections, etc. All interactive measurement tools can be freely configured to display the desired parameters in tables, lists, or graphs. You can measure intensity values for rectangles and contours and have the option of interactive measurement in online images.
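Several of the listed parameters can be derived from area and perimeter alone. As an illustration, the following Python sketch computes two of them using common image-analysis definitions (these formulas are assumptions, not taken from the ZEN documentation):

```python
import math

def shape_factor(area: float, perimeter: float) -> float:
    # Circularity: 1.0 for an ideal circle, smaller for elongated or
    # ragged contours (common definition: 4*pi*A / P^2).
    return 4.0 * math.pi * area / perimeter ** 2

def equal_area_circle_radius(area: float) -> float:
    # Radius of the circle whose area equals the measured contour area.
    return math.sqrt(area / math.pi)

# For a circle of radius 2: area = 4*pi, perimeter = 4*pi
circularity = shape_factor(4.0 * math.pi, 4.0 * math.pi)
radius = equal_area_circle_radius(4.0 * math.pi)
```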
Prerequisite ü You have opened the image where you want to add measurements.
1. Go to the Analysis tab and open the Interactive Measurement tool.
2. In the Feature Subset section, click Define.
à A dialog opens.
3. Activate all the measurement features you want to be able to use for interactive measure-
ments.
4. Click OK.
à The dialog closes and the selected features are available for further definitions.
5. In the Feature Set section, click Define.
à A dialog opens.
6. In the Available Elements list, select the graphical element for which you want to define
measurement features.
à The already defined features for the respective element are displayed in the Selected
Features list in the middle.
7. To add an additional feature to the currently selected graphical element, double-click the feature in the list on the right, or select it and click the corresponding button.
à The feature is added to the element and is displayed in the Selected Features list.
8. Repeat the previous steps to define the features for all graphical elements you need.
9. Click OK.
à The changes are saved and the dialog for feature selection closes.
10. In the Measurement Sequence section, click Define.
à A dialog opens.
11. In the Available Elements list, double click the elements you want to add to the measure-
ment sequence.
à The elements are displayed in the Selected Elements Sequence list in the middle.
12. Optionally, you can also add additional features to the currently selected element of the se-
quence by double clicking the feature in the list on the right.
13. Click OK.
à The changes are saved and the dialog for sequence definition closes.
14. Click Run.
à The dialog for interactive measurement execution opens.
15. Click Start.
à You enter the drawing mode.
16. Go to the image and draw the graphical element.
à The first element of the sequence is drawn into the image and added to the table on the
left. The values are displayed in the image according to your previous settings.
à The second element of the sequence is automatically selected for drawing.
17. Continue drawing all elements of the sequence into the image.
18. Click Stop.
à You exit the drawing mode.
19. Click OK.
à The dialog closes and the image with measurements is displayed in the 2D view of ZEN.
Parameter Description
Feature Set
– Feature Set Dropdown  Selects and loads previously saved feature definitions/feature sets. If you have made changes to a feature definition, the name of the feature selection is marked with an asterisk (*). If you close the application without saving a changed feature selection, you will be asked whether you want to save the changes.
– Options  Opens the options menu to create, import, export, save, or delete a feature set definition.
– Define Opens the Feature Selection dialog to define the features that are
available for interactive measurements.
– Feature Subset Dropdown  Selects and loads previously saved definitions of subsets. If you have made changes to a subset definition, the name of the feature subset is marked with an asterisk (*). If you close the application without saving a changed feature subset, you will be asked whether you want to save the changes.
– Define Opens the Feature Subset Definition dialog to define which fea-
tures are available for the definition of the feature set.
– Define Opens the dialog to define the sequence of measurements that you
want to execute interactively.
Create Measurement Table  Creates a measurement data table and opens it as a separate document. This contains the measurement data from the Measure view of the current image.
See also
2 Using Interactive Measurements [} 248]
In this dialog you can specify which features are measured with the available graphical elements.
Line Displays graphical elements you can use to measure a single distance.
Lines Displays graphical elements you can use to measure several distances
at once.
Points Displays graphical elements you can use to count various events in an
image.
Parameter Description
Name Displays the name of the feature.
Features Section
This section displays a list with all the features that are available for measuring the selected graph-
ical element. For a description of individual measurement features, see Measurement Features
[} 440].
Parameter Description
Search Features Here you can enter parts of the name of the feature that you are
looking for. The features in which the entered character string occurs
are listed.
From the dropdown list, select the type of feature by which you want the features to be filtered.
- Intensity Features  All features that analyze intensity values are listed.
- Image Features  All features that contain meta information about the measured image are listed.
- Position Features  All features that describe the position are listed.
- Position Features Unscaled  All features that describe unscaled positions are listed.
- Statistical Features  All features that can be used for plotting in a heatmap (i.e. that provide statistical values suitable for heatmap plotting) are listed.
In this dialog you can specify which features are available in the Feature Selection dialog by activating the checkbox in front of each feature. A right-click menu lets you select or deselect all features.
Parameter Description
Search Features Here you can enter parts of the name of the feature that you are
looking for. The features in which the entered character string occurs
are listed.
From the dropdown list, select the type of feature by which you want the features to be filtered.
- Intensity Features  All features that analyze intensity values are listed.
- Image Features  All features that contain meta information about the measured image are listed.
- Position Features  All features that describe the position are listed.
- Position Features Unscaled  All features that describe unscaled positions are listed.
- Polygon-based Features  All polygon-based features are listed.
- Statistical Features  All features that can be used for plotting in a heatmap (i.e. that provide statistical values suitable for heatmap plotting) are listed.
In this dialog you can define an interactive measurement procedure. You can specify the order in which the individual graphical elements are to be drawn and which measurement parameters are calculated for them.
Line Displays graphical elements you can use to measure a single distance.
Lines Displays graphical elements you can use to measure several distances
at once.
Points Displays graphical elements you can use to count various events in an
image.
Double click on an element to select it and add it to the Selected Elements Sequence list.
Parameter Description
Delete  Deletes the selected feature.
Features Section
This section displays a list with all the features that you can measure with the graphical element
activated in the Available Elements section. For a description of individual measurement fea-
tures, see Measurement Features [} 440].
Parameter Description
Search Features Here you can enter parts of the name of the feature that you are
looking for. The features in which the entered character string occurs
are listed.
From the dropdown list, select the type of feature by which you want the features to be filtered.
- Intensity Features  All features that analyze intensity values are listed.
- Image Features  All features that contain meta information about the measured image are listed.
- Position Features  All features that describe the position are listed.
- Position Features Unscaled  All features that describe unscaled positions are listed.
- Statistical Features  All features that can be used for plotting in a heatmap (i.e. that provide statistical values suitable for heatmap plotting) are listed.
With this dialog you can execute your previously defined sequence for interactive measurement.
You can draw the graphical elements into the image. You also have the standard controls of the
Dimensions and Display tabs to adapt the image display and change to other image dimensions
to draw the elements.
The software offers the possibility to use a magnetic cursor in your image. This cursor detects
edges/contrast changes and automatically moves to them, which can help you to add measure-
ments or annotations. Note that this cursor only works reliably on one channel, so for multi-chan-
nel images only activate one channel in the view options. The magnetic cursor can be activated
and deactivated via the right-click context menu of your image, or with the shortcut Alt + C.
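The snapping behavior can be illustrated with a one-dimensional sketch: given an intensity profile along a line, the cursor position jumps to the strongest contrast change within a small search radius. The following Python sketch is illustrative only and is not ZEN's actual edge detector:

```python
import numpy as np

def snap_to_edge(profile: np.ndarray, x: int, radius: int = 5) -> int:
    """Move position x to the strongest intensity step within +/- radius pixels."""
    grad = np.abs(np.diff(profile.astype(float)))  # grad[i] = |p[i+1] - p[i]|
    lo = max(x - radius, 0)
    hi = min(x + radius, len(grad))
    return lo + int(np.argmax(grad[lo:hi]))

# Dark-to-bright step between pixels 6 and 7 along the measured line
profile = np.array([5, 5, 5, 5, 5, 5, 5, 200, 200, 200])
snapped = snap_to_edge(profile, 4)
```

This also shows why the magnetic cursor works best on a single channel: with several channels active, each channel would suggest a different "strongest" edge.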
Parameter Description
Operation
– Start Starts the drawing mode to add the sequence of graphical elements
to the image.
– Pause/Continue  Pauses the drawing. This allows you to modify graphical elements that have already been drawn. The button changes to Continue for resuming the drawing of the elements.
Measurement Sequence  Displays the graphical elements of the current measurement sequence in the predefined order.
Measurement Data Table  Displays the values measured with the graphical elements. A right-click in the header of a column opens a context menu for the respective column.
– Sort Data Opens a dialog to define how to sort the data in the table.
– Filter Data Displays a field to define criteria for filtering the data in the table.
– Name Activated: Displays the column with the name of the graphical ele-
ments.
– Feature Activated: Displays the column with the name of the measurement
feature used for the respective element.
– Unit Activated: Displays the column with the unit of the measurement.
Cancel Cancels the interactive measurement without saving the graphical ele-
ments.
4.7 Panorama
This module enables you to create overview images of large areas of your sample.
Prerequisites
For the interactive panorama acquisition, the following prerequisites are necessary:
§ All available microscope components in the MTB (MicroToolBox) have to be defined correctly.
§ The Panorama module is licensed and activated under Tools > Modules Manager.
4.7.1 Prerequisites
à Now you see the camera's live image of your sample in the Center Screen Area.
3. Click on the Set Exposure button.
à The exposure time will be calculated automatically.
4. Alternatively you can set the camera parameters manually in the Camera tool.
5. Focus on your sample now.
You have completed the prerequisites for a panorama experiment.
Before starting the experiment itself, we recommend that you first take a reference image for the shading correction. This image will be used later for processing the panorama image.
4. In the Imaging Setup tool, you can check the settings before/after the experiment.
à The Advanced Imaging Setup is only displayed if you have activated the Enable Ad-
vanced Imaging Setup checkbox under Tools > Options > Acquisition > Acquisi-
tion Tab > Enable Imaging Setup.
5. In the Acquisition Mode tool, click the Get button to transfer the active camera settings
into the experiment.
As an alternative you can define your experiment settings here as well.
6. In the Panorama tool you can adjust several options for automatic or manual stitching if
desired.
7. Finally save your experiment with a suitable name in the Experiment Manager.
Info
4 After the image acquisition you can adapt the size of the image and the surrounding area to your needs with the zoom keys F7 and F8.
4 Keep a sufficient overlap area of the live image with the stored image.
à You will see the reduced display of the start image in the Center Screen Area.
à The displayed image is a live image. You can still change position and focus.
2. Click on the Acquire Tile Image button in the Center Screen Area to acquire the first tile
image.
à The image will be acquired and stored. The live image is still active as an overlay to the
stored image.
3. Using the mouse, move the blue frame with the active live image in the desired direction next to the stored image.
4. Now move the sample to the corresponding neighbor position using the microscope stage.
à Try to position the structures in the overlap region as precisely as possible.
5. Click the Acquire Tile Image button again to add this tile to the image.
7. Continue these steps until you have acquired the desired panorama image of your sample.
8. After acquiring the last tile image click on the Stop button to close the live mode.
9. Finally end the experiment via the Stop button on the Acquisition tab.
à As a result you now see the recorded panorama image in a new image container.
The field of view of your microscope might be too small for the sample area you wish to acquire.
You can automatically visualize panorama images from a sample area which is larger than the
camera sensor can cover by means of a single snap.
With the Live Panorama tool, you can move the stage while the software automatically acquires
individual images, stitches them together and creates a panorama image.
Note that Live Panorama works for un-coded and un-motorized stages as well as motorized
stages.
Prerequisite ü You have set up and configured your microscope system correctly.
ü You work with brightfield or widefield illumination.
ü Your image has sufficient contrast. Lower magnification objectives typically give better results.
ü A sample is on the stage and stays in focus. Note that you can adjust the focus during the
Live panorama.
ü You have started the software and selected the Locate tab. If you work with an LSM, activate
camera mode.
1. On the Locate tab, click the Live button to get a live image from the microscope camera.
Adjust the camera and microscope settings to see a well illuminated and sharp live image.
2. Navigate to a specific area on your sample you want to image.
à Move the stage gently and not too fast!
3. Select the Live Panorama tool, and click the Start Live Panorama button.
à After a short moment, the camera rectangle changes to green and you can move the
stage. The panorama acquisition starts. You see the live image of the sensor area.
à Note that the color of the rectangle changes to orange or red when the stitching algorithm loses track. This might happen if the stage is moved too quickly. In that case you have to manually go back to the last known or successfully synchronized position. If you wish to image a continuous area of the sample without any gaps, we recommend moving over your sample in a zig-zag pattern to slowly build up the image.
The panorama image is added to the Documents tab > Images and Documents tool.
Stitching artifacts can be reduced by making sure the camera rotation has been corrected. You can also use the Stitching image processing function to correct them.
You can save the image to your file system.
See also
2 Live Panorama Tool [} 269]
The next chapters show you how to process panorama images with the Stitching processing function. Using this method you can correct an offset between the tile images. We will show you the different settings and compare the output images, so you can see which settings give you the best result.
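A common way such an offset between overlapping tiles can be estimated is phase correlation. The following Python sketch illustrates that general technique; it is not ZEN's implementation:

```python
import numpy as np

def estimate_shift(tile_a: np.ndarray, tile_b: np.ndarray):
    """Estimate the (dy, dx) translation mapping tile_a onto tile_b via
    phase correlation (assumes equal shapes and a cyclic shift)."""
    cross = np.conj(np.fft.fft2(tile_a)) * np.fft.fft2(tile_b)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap peak positions that correspond to negative shifts
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = np.roll(a, (3, -5), axis=(0, 1))  # tile_b is tile_a shifted by (+3, -5)
shift = estimate_shift(a, b)
```

The estimated translation is what a stitching step applies to each tile so that the structures in the overlap region line up.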
Prerequisites
1. Open the Method tool and select in the group Geometric the Stitching function.
The following instructions are all based on this selection and show the different settings and re-
sults of this function.
You have successfully used the Stitching function. As you can still see shadows and edges in the output image, the following chapters show you how to use the function to achieve better results.
If your tile images contain a certain background shading, you can correct this if you have acquired
a reference image for the shading correction. This image has to be opened in the Center Screen
Area.
1. In the Input tool, select the tile image for the stitching as the first input.
2. In the Parameters tool select the New Output button.
à This will keep the original image and create a new output image.
3. Activate the Correct Shading checkbox.
4. Select the Reference entry from the dropdown list.
à This will let you select your reference image, which is opened in the Center Screen Area.
5. Now as a second input image select the reference image for the shading correction in the
Input tool.
6. Click on the Apply button to start the processing.
As a result you will get a stitched panorama image without any shading influences. The next
chapter will show you how to get rid of the edges which are still visible between the tiles.
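The principle behind the shading correction can be sketched as a flat-field division: each tile is divided by the mean-normalized reference image, which removes the illumination falloff. The following Python sketch illustrates the general principle, not the exact ZEN computation:

```python
import numpy as np

def correct_shading(tile: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Flat-field correction: divide out the mean-normalized shading reference."""
    ref = reference.astype(float)
    flat = ref / ref.mean()          # mean-normalized illumination profile
    return tile.astype(float) / flat

# Synthetic example: a uniform sample seen through uneven illumination
y, x = np.mgrid[0:16, 0:16]
illumination = 1.0 - 0.4 * ((x - 8) ** 2 + (y - 8) ** 2) / 128.0  # vignetting
tile = 100.0 * illumination          # what the camera records
reference = 200.0 * illumination     # reference image of an empty field of view
corrected = correct_shading(tile, reference)
```

After the division, the vignetting pattern cancels out and the uniform sample appears with a flat gray value across the tile.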
If the image contains very little shading, there is an alternative method to homogenize the image transitions between the single tiles.
1. Under Image Parameters in the Input tool select the tile image for the stitching.
2. In the Parameters tool, click on the New Output button.
For extreme cases you have the possibility to combine both transition corrections.
3. Under Image Parameters in the Input tool select the tile image for the stitching and the
reference image for the shading correction.
You can create a Multi Image to compare the different results of your processed images via the
Splitter-Mode.
1. To compare different images, you can select the Split Display via the Create New Multi
Image button.
2. On the Split Display tab you can define how many images are displayed next to each other in X and Y direction and how they are synchronized, e.g. 2 Columns and 2 Rows.
3. Move each of the different panorama images via drag&drop from the Right Tool Area >
Images and Documents gallery to an empty frame in Center Screen Area.
In our example we show the transition areas of the tile image as raw image (top left), as fused
tile image (top right), as shading corrected tile image (bottom left) and finally the combination
of shading correction and fused tiles (bottom right). In this last image no transitions between
the tiles are visible anymore.
See also
2 Panorama View [} 268]
In this view you see a representation of the microscope stage. The Live image from the camera (blue frame) is automatically shown in the middle of the image area. Furthermore, a tool window is displayed that allows you to control the image acquisition, e.g. perform auto exposure or acquire an individual tile image.
See also
2 General View Options [} 1029]
In the image area the full travel range of the microscope stage is displayed. You can control the
stage view using the arrow icons at the edges of the image area. The view can be enlarged, re-
duced or moved using the general control elements.
Navigator frame
The current stage position is shown as a tile outlined in blue, the Navigator frame. In the Naviga-
tor frame you can see the camera's live image.
To move the frame, double-click on the position on the microscope stage to which you want to
move it.
To acquire images, use the Acquisition buttons in the Tools window.
See also
2 Tools Window [} 268]
The tool window for panorama view is normally visible in the lower right corner of the center
screen area. It becomes active if you move the cursor over it. You can use it to set acquisition pa-
rameters and acquire tile images for your panorama image.
Parameter Description
Center to Live Navigator  Centers the stage view at the current position of the Navigator frame.
Action Buttons  With the three action buttons (Live, Set Exposure, Continuous) you can control acquisition parameters as you are used to from the Acquisition tab.
Acquire Tile Image Acquires a tile image. This comprises all activated channels as well as
Z-stacks. After the acquisition, the tile image is placed in the corre-
sponding location in the stage view.
With this tool you can acquire a panorama image exceeding the size of a single image.
Parameter Description
Start Live Panorama  Starts the acquisition. The button disappears and the animated Stop button appears in the window above it.
For more information, see Acquiring a Panorama Image Automatically [} 260].
This module enables you to import the BIO-Formats by OME (Open Microscopy Environment). For more details on the supported BIO-Formats, see https://www.openmicroscopy.org/.
Info
Time Deviation
In the very unlikely event that there is a time deviation greater than five minutes between the
client and the server, the authentication of the client fails for security reasons. In this case an
error message is displayed and working with ZEN Data Storage is not possible. You have to fix
the system time of the client and/or server before a retry.
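The rule described above amounts to a simple clock-skew check, sketched here in Python (an illustration of the five-minute rule, not ZEN's authentication code):

```python
from datetime import datetime, timedelta

MAX_SKEW = timedelta(minutes=5)

def authentication_allowed(client_time: datetime, server_time: datetime) -> bool:
    """Reject the connection if client and server clocks differ by more than 5 minutes."""
    return abs(client_time - server_time) <= MAX_SKEW

now = datetime(2024, 1, 1, 12, 0, 0)
ok = authentication_allowed(now, now + timedelta(minutes=4))            # within limit
rejected = not authentication_allowed(now, now - timedelta(minutes=6))  # too much skew
```

Synchronizing both machines against the same time server avoids this situation entirely.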
You can store your data on your computer's file system. Additionally, you have the option to save
your projects and images in a database called ZEN Data Storage. This makes the information more
accessible, as you can search within the database and filter your results. The data storage is an ad-
ditional product which has to be installed. For more information, refer to the installation guide of
ZEN Data Storage.
To activate the access and use of the database, you need to go to Tools > Modules Manager to
activate Data Storage Client. Afterwards restart the software.
After the installation of ZEN Data Storage (for information see the Installation Guide ZEN Data
Storage), you have to set up the server in ZEN. This setup has to be done once on every machine
using the data storage.
ü In Tools > Modules Manager, the Data Storage Client module is activated.
1. Click Tools > Options > Settings.
2. On the Simple tab, change the Host Name if necessary.
3. Select the Hosting Scheme you use for the server.
4. Set the Storage Server Port.
5. Click Server Setup.
6. Click Yes to confirm the message and set up ZEN Data Storage.
à The server is set up in ZEN and a setup dialog opens.
7. Click Close to exit the setup dialog.
à Your server settings are also automatically validated and the result is displayed below the
Validate Settings button.
8. Click OK to close the Tools > Options dialog and save all settings.
You have successfully set up your ZEN Data Storage server in ZEN.
See also
2 Storage Settings Tab [} 852]
When setting up your ZEN Data Storage on the Settings tab of the Tools > Options dialog, your
server settings are usually validated automatically. For a manual validation and to look up the sta-
tus, take the following steps:
You can save any image to the data storage. You can also open images from the data storage,
update them, and save the updated image to the data storage.
Info
Changes to local project
If you save a ZEN Connect project to the ZEN Data Storage, the local project file is changed and can no longer be used to open the project in ZEN. The project has to be opened from the ZEN Data Storage, see Opening or Deleting a ZEN Connect Project from ZEN Data Storage [} 273].
You can save existing ZEN Connect projects that are currently saved on your computer to the data
storage. If you save a project with images, the image information is contained in the Project
View. You can check it in the Layers View.
Optionally, you can create a new project and save it immediately to the data storage.
Prerequisite ü You have loaded a ZEN Connect project that is saved to your computer, or you are in the
process of creating a new Connect project.
1. Click File > ZEN Data Storage > Save and Convert ZEN Connect Project.
The ZEN Connect project is saved to ZEN Data Storage.
See also
2 Saving an Image to ZEN Data Storage [} 270]
Info
Saving changes
If you have made changes to a ZEN Connect project opened from ZEN Data Storage, you have
to first save the project back to the database (via File > ZEN Data Storage > Save ZEN Con-
nect Project) before you export it.
In ZEN, you can export a ZEN Connect project from the ZEN Data Storage to use it locally, e.g. on
a machine with no access to the database.
Prerequisite ü You have opened your project from ZEN Data Storage in ZEN, see Opening or Deleting a ZEN
Connect Project from ZEN Data Storage [} 273].
1. In the ZEN Connect tool, select Export Project... via the Export button.
2. In the file browser, select the location where you want to export your project.
3. Click OK.
à Your project is exported to the selected location. The state of the export is displayed in
the progress bar. For each exported project a subfolder is created in the selected export
location.
Info
If you have a ZEN Connect project open, you cannot upload the image to ZEN Data Storage.
The folder in the Auto Save tool is automatically set to the path of the project and you cannot
select an upload to the server.
Info
Using Direct Processing and ZEN Data Storage
If you are using the ZEN Data Storage server as the location for storing your data, all acquisi-
tion and processing computers involved must have access to the transfer share defined in ZEN
Data Storage.
Also note that the images are only uploaded to ZEN Data Storage after they are no longer
open in ZEN.
4.9.8 Opening or Deleting a ZEN Connect Project from ZEN Data Storage
Info
Working locally
If you have opened a ZEN Connect project from ZEN Data Storage, but you want to work with
it locally, use the export functionality before you make any changes to the opened project, see
Exporting a ZEN Connect Project from ZEN Data Storage [} 271].
Prerequisite ü You have saved a ZEN Connect project to ZEN Data Storage, see Saving a ZEN Connect
Project to ZEN Data Storage [} 271].
1. Click File > ZEN Data Storage > Open ZEN Connect project.
à The Stored Documents dialog opens.
2. Select the ZEN Connect project and click Open.
In the ZEN Connect Project View, the current state of the project is displayed. In the Image
View, the sample holders are marked and previously acquired images are displayed. Note that the
project including its images is linked in the data storage. So take care that these files are not
moved or deleted as the links will be broken.
The current stage position is marked with a cross hair.
See also
2 Stored Documents Dialog [} 277]
In the Stored Documents dialog, you select Connect projects or images to open or to delete.
You can configure the columns of the table according to your needs.
1. Click File > ZEN Data Storage > Open ZEN Connect Project or File > ZEN Data Stor-
age > Open Image.
à The Stored Documents dialog opens.
2. Right-click into the header of the table and activate the columns you want to see in the ta-
ble.
You have configured the table.
See also
2 Stored Documents Dialog [} 277]
1. Click File > ZEN Data Storage > Open ZEN Connect Project or File > ZEN Data Stor-
age > Open Image.
à The Stored Documents [} 277] dialog opens.
2. Click .
à The search area with filter panel and metadata of the search results is displayed. The
number in brackets indicates the number of search results.
3. To limit the search results, select a term of interest.
à The available images are displayed accordingly.
4. Optionally, enter a term you are looking for in the Search field, e.g., the file name.
5. To filter images based on their tags, click Tags.
à An input field opens as dropdown.
6. In the search field, enter a term you want to filter for. Alternatively, in the list of available
tags, activate all tags you want to filter for and activate Or/And, depending on whether the
filtered images should contain all activated tags (And), or at least one of the tags (Or).
à The filter for tags is applied.
The available files are filtered and displayed accordingly.
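The And/Or switch determines how multiple activated tags combine. The set logic can be sketched in plain Python (the image records and tag names below are made-up examples, not ZEN data):

```python
# Sketch of the And/Or tag filter semantics (illustrative only;
# the file names and tags are invented examples).

images = {
    "scan_01.czi": {"dapi", "confluent"},
    "scan_02.czi": {"dapi"},
    "scan_03.czi": {"gfp", "confluent"},
}

def filter_by_tags(images, selected, mode):
    """Return file names whose tag set matches the selected tags.

    mode="And": image must carry all selected tags.
    mode="Or":  image must carry at least one selected tag.
    """
    if mode == "And":
        return sorted(n for n, tags in images.items() if selected <= tags)
    return sorted(n for n, tags in images.items() if selected & tags)

print(filter_by_tags(images, {"dapi", "confluent"}, "And"))  # ['scan_01.czi']
print(filter_by_tags(images, {"dapi", "confluent"}, "Or"))
```

With And, only images carrying every activated tag pass the filter; with Or, one matching tag is enough.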
4.9.11 Adding and Deleting Tags for ZEN Data Storage Images
To structure your image data in ZEN Data Storage, you can also add tags to individual images.
2. Under the preview of the image, open the Metadata section by clicking .
à An additional section with the image metadata opens.
3. For Tags, click +.
à A text field is displayed.
4. Enter the respective name and press Enter.
à The tag is added to the image.
5. To delete a tag, click X in the tag name.
à The tag is removed.
You have added (or deleted) a tag for an image in ZEN Data Storage.
You can create collections to structure your data and share it with others.
Prerequisite ü You have started the application with active user management to be able to add users or
groups to a collection.
1. Go to Tools > Options > Collections.
2. Click .
à The Add Collection dialog opens.
3. Enter a name for the new collection.
4. Click .
See also
2 Add/Edit Collection Dialog [} 853]
2 Add Collection Access Dialog [} 853]
2 Editing or Deleting a Data Collection [} 275]
Prerequisite ü You have created a collection for your data. For more information, see Creating a Collection
for Data [} 274].
ü You have started the application with active user management to be able to add users or
groups to a collection or edit them.
1. Go to Tools > Options > Collections.
2. To delete a collection, select it and click .
3. To edit a collection, select it and click .
à The Edit Collection dialog opens.
4. If you want to change the name, adapt it under Collection Name.
5. If you want to change the access of a particular user, change the Access Level with the re-
spective dropdown.
6. To add a new user or group, click .
à The Add Collection Access dialog opens.
7. On the Groups and/or Users tab, select the group or user you want to grant access to the
collection. Selection of multiple users and groups is possible by pressing Ctrl.
8. Click OK.
à The Add Collection Access dialog closes and the selected user(s) and/or group(s) are
granted access based on the selection.
9. Click OK.
à The Edit Collection dialog closes.
10. Click OK.
à The Tools > Options dialog closes.
You have edited (or deleted) this collection.
In ZEN Data Storage you can share your data with a collection.
Prerequisite ü You have created collections for ZEN Data Storage, see Creating a Collection for Data
[} 274].
1. Click File > ZEN Data Storage > Open Image if you want to share an image, or click File >
ZEN Data Storage > Open ZEN Connect Project if you want to share a project.
à The Stored Documents dialog opens.
2. Select the files you want to share. To select multiple files, press Ctrl while clicking on the
files.
3. Right click on the selected images or projects and select Add to Collection. Alternatively,
click .
à The Add to Collection dialog opens.
4. In the table, activate the checkbox for every collection you want to share the data with.
5. Click Save.
You have shared your images or projects with other users/groups that are part of a collection.
Shared files are marked by in the Shared column.
In ZEN Data Storage you can directly share your data with other users or groups.
1. Click File > ZEN Data Storage > Open Image if you want to share an image, or click File >
ZEN Data Storage > Open ZEN Connect Project if you want to share a project.
à The Stored Documents dialog opens.
2. Select the file you want to share.
3. Right click on the selected images or projects and select Share and Manage Access. Al-
ternatively, click .
à The Share and Manage Access dialog opens.
4. Enter the name of the user or group you want to share the data with and select it from the
list.
5. In the access dropdown, select the access level for the respective user or group and click
Add.
à The user or group is added to the table and the access right is displayed.
6. Repeat the previous steps until all users and groups that you want to share your data with
are added to the table.
7. Click Save.
You have directly shared your image or project with other users and groups.
Shared files are marked by in the Shared column.
Info
Domain Administrator
It is not possible to use the initial Active Directory domain administrator as a user for ZEN Data
Storage. The administrator account for the Active Directory domain should only be used for
the administration of the Active Directory itself.
When you are using ZEN with ZEN Data Storage, you can add individual Active Directory users to
your user management.
Prerequisite ü ZEN is open with active user management and you are signed in as administrator.
ü You are using ZEN Data Storage and have configured an Active Directory group to be able to
use Active Directory in ZEN, see Setting Up the Login with Windows Credentials (Active Di-
rectory) [} 48].
ü During the installation of ZEN Data Storage, you have set the parameter Enable Active Di-
rectory to True on the Settings tab of the installer. For more information also refer to the
installation guide for ZEN Data Storage.
ü The ZEN Data Storage server must be part of the same Windows domain from where the soft-
ware tries to login with its Windows credentials.
1. Go to Tools > Users and Groups.
à The User and Group Management dialog opens.
2. Go to Users.
à The tab displays all currently configured users.
3. Click .
à The New User dialog opens.
4. For Type, select Active Directory.
5. For Name, click .
à The Select User dialog opens.
à The fields for object type and location are filled with a default. To change them, click Ob-
ject Types or Locations to open another dialog to select the respective Object Types
or Locations.
6. In the text field below, enter the name of the user you want to select. If you are not sure if
your name is correct, click Check Names to open a dialog and select the suitable entry.
7. Click OK.
à The name is displayed in the New User dialog.
8. Click OK to close the New User dialog.
à The respective Active Directory user is added to the list of users.
9. Click OK to close the User and Group Management dialog.
You have configured an Active Directory user. You can now assign this user to a group to grant
them certain rights and privileges.
With this dialog you can open images or ZEN Connect projects saved in the ZEN Data Storage.
1 Stored Data
Displays the data available in ZEN Data Storage (images or ZEN Connect projects) and
provides functionality to filter the data, see Stored Data Section [} 278].
2 Image Preview
Displays a preview of the currently selected image. This functionality is only available if
you try to open images from the data storage; there is no preview available for ZEN Con-
nect projects. For image documents that contain attachments like thumbnails, labels or
preview scans, a special control in the Image Control Section allows you to switch the
view to those attachments.
See also
2 Saving a ZEN Connect Project to ZEN Data Storage [} 271]
2 Saving an Image to ZEN Data Storage [} 270]
2 Opening or Deleting an Image from ZEN Data Storage [} 272]
2 Opening or Deleting a ZEN Connect Project from ZEN Data Storage [} 273]
Parameter Description
Search Searches the database for the term entered in the text field.
Tags Opens an input field to filter images based on their tags, see Filtering
Connect Projects and Images in ZEN Data Storage [} 274].
– Search Field Searches the image tags for the input of this field
Parameter Description
– And Activated: Applies a logical And operator if multiple tags are se-
lected in the list below and filters only images that contain all selected
tags.
– Tag List Displays all available tags. Filters the documents in ZEN Data Storage
based on the activated tags of this list.
File Dropdown Selects which files are displayed in the table below. If you have access
to collections, they are also displayed as an option in the dropdown,
see Creating a Collection for Data [} 274].
– Files Shared With Me Displays all files that are shared with you, including public images/projects.
Document Table Displays all available images or ZEN Connect projects and information
about them. The table entries can be sorted according to each individ-
ual column. You can customize the view of the table by toggling the
visibility of individual columns, see Configuring the Stored Docu-
ments Table [} 273].
The displayed information for the images is extracted from the image metadata. If you
have configured ZEN Data Storage to extract custom metadata, it can also be displayed
as columns. For information about configuring custom metadata, see the installation
guide of ZEN Data Storage.
Additional functionality for the images or projects is provided by an
extra right click menu, see Stored Documents Right Click Menu
[} 280].
– File Name Displays the file name and the format of the image. You can sort the
file names alphabetically.
– Original File Name Displays the original file name in case you have uploaded the image as a third party image with the ZEN Data Storage Uploader.
– Software Application Displays with which software application the image was acquired.
– Last Changed Displays at what date the image was last changed.
Parameter Description
– Shared Displays if the file is shared with others.
Share and Manage Access Opens the Share and Manage Access dialog to directly share the selected file with a user or group, see Sharing Data Directly With Users and Groups [} 276].
Add to Collection Opens the Add to Collection dialog to share the selected files with a collection, see Adding Data to a Collection [} 276].
If you right click a document in the list, the following menu items are displayed.
Parameter Description
Share and Manage Access Opens the Share and Manage Access dialog to share the file with a user or group, see Sharing Data Directly With Users and Groups [} 276].
Add to Collection Opens the Add to Collection dialog to add the selected file to a collection, see Adding Data to a Collection [} 276].
See also
2 Add to Collection Dialog [} 281]
Parameter Description
Z-Position Only available for z-stack images.
Selects which z-slice is displayed in the preview.
– Tags Displays and edits tags for the image. For more information, see Adding and Deleting Tags for ZEN Data Storage Images [} 274].
– Image Metadata Displays metadata for the currently selected image. The metadata is synchronized with the displayed columns in the stored data section on the left.
Parameter Description
Search for Collections Searches the collections according to the input.
Manage Collections Opens the Manage Collections dialog to manage the collection.
See also
2 Collections Tab [} 852]
2 Adding Data to a Collection [} 276]
Parameter Description
Enter Name Searches all available users and groups according to the input and selects a user or group from a dropdown.
Access Rights dropdown
– Read Grants the group/user access to see, open and download a docu-
ment.
– Manage Grants the group/user access to modify the access control list.
Add Adds the selected user or group with the selected access right to the
table.
User and Group table Displays all users and groups added for sharing the file.
– Access Right Displays the access right for the respective user or group and allows
you to change the access.
Save Shares the file with the users and groups defined here and closes the
dialog.
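The two access levels behave like a simple access-control list: Read lets a user open and download a document, while Manage additionally lets them modify the access list. A minimal sketch of that rule in Python (the class, method names and user names are invented for illustration, not part of ZEN):

```python
# Illustrative access-control sketch: "Read" lets a user see/open/download
# a document; "Manage" additionally lets them modify the access list.
# The class and user names are made up for illustration.

class SharedDocument:
    def __init__(self, owner):
        self.acl = {owner: "Manage"}   # the owner can always manage access

    def can_read(self, user):
        return user in self.acl        # both access levels include reading

    def grant(self, actor, user, level):
        if self.acl.get(actor) != "Manage":
            raise PermissionError(f"{actor} may not modify the access list")
        if level not in ("Read", "Manage"):
            raise ValueError("unknown access level")
        self.acl[user] = level

doc = SharedDocument("alice")
doc.grant("alice", "bob", "Read")
print(doc.can_read("bob"))    # True
```

A user with only Read access cannot grant access to anyone else; attempting to do so fails.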
See also
2 Sharing Data Directly With Users and Groups [} 276]
4.10 ImageJ
§ Easy exchange of images, from simple two-dimensional images, to more complex, multidi-
mensional entities, like Z-stacks, time series and so on. The exchange can go both ways, from
ZEN to ImageJ, as well as from ImageJ to ZEN.
§ Execute functions in ImageJ, without having to leave the ZEN environment.
§ Combine the two benefits, introduced above: sending a ZEN image to ImageJ, having it pro-
cessed there, and then returning the resulting image back to ZEN in one single step.
Note that different versions and variants of ImageJ and Fiji exist. This document is based on the
ImageJ/Fiji version 1.46. See notes for specifics of other versions and variants. For the sake of
simplicity, Fiji is also implied wherever ImageJ is mentioned in the following text.
4.10.1 Preparations
Info
Note that the extension for ImageJ is not available in ZEN lite.
1. Install ImageJ on your computer. Make sure that you use the latest version (check for on-
line updates after installation).
2. Download loci_tools.jar and drop it into the ImageJ/plugins folder.
3. Note the name of the folder with your preferred alternative. While you can switch freely
among them all, it makes sense to stick to one and the same environment, once you have
started to add your own programs and macros.
4. The ImageJ/Fiji folder you eventually decide on can either belong to you alone or be
shared with other users of the system. It is up to you to decide what you prefer: if you are
the only user, nobody will meddle with its contents (images, macros etc.), but then you will
need to copy and distribute the contents if they are of interest to others as well.
You have successfully fulfilled all prerequisites. You can now continue with setting up ImageJ
within ZEN software.
The extension is automatically included in the ZEN installation. To set it up, start the software and
then proceed as follows:
The extension offers the possibility of sending images to ImageJ to get processed, to retrieve the
result of the operation or both. The following instruction will show the basic steps which are nec-
essary to apply ImageJ methods on any images.
2. In the Parameters tool, specify if the method selected will need an input image and/or
provide a resulting image.
3. In the Apply tool, click on the Apply button to execute the command.
You have successfully applied a method to an image.
ZEN to ImageJ
.ome.tif: Original
2D image B/W .czi: 32-bit (RGB). Convert the image in ImageJ to the required pixel type using the Image > Type command.
2D image 36/42 bit color .czi: Convert the CZI image to 24/48 bits before sending it or using it in a method.
12bit B/W images: Error in ImageJ. Workaround: convert the pixel type of the image to 16 bits in ZEN.
ImageJ to ZEN
.ome.tif: Original
Multi-channel x Z-Stack x T-series: MD image. Hint: select RGB in Quick Color Setup to get the same colors for channels as in ImageJ.
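The 12-to-16-bit workaround amounts to widening pixel values into the larger range. One common convention is a 4-bit left shift, sketched below in plain Python; this is only an illustration of the arithmetic, the actual conversion is done with ZEN's pixel-type processing function and may scale differently:

```python
# Illustrative only: widening 12-bit values (0..4095) to 16-bit (0..65535)
# by left-shifting 4 bits, one common convention in imaging software.
# In practice, use ZEN's pixel-type conversion before sending to ImageJ.

def twelve_to_sixteen(pixels):
    return [p << 4 for p in pixels]

print(twelve_to_sixteen([0, 2048, 4095]))  # [0, 32768, 65520]
```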
5 Acquisition Toolkits
You can expand the software functionality with the following acquisition packages:
This module allows you to configure inhomogeneous acquisition experiments with support for all
experiment dimensions: time series, z-stacks, tile images and channels. Experiments can consist of
any number of components. A component is referred to as an experiment block. Each experiment
block has a distinct number, which is shown above the block. In the dedicated tool, you can set
up four different types of experiment blocks and make use of a set of powerful processing func-
tions to extract or fuse multiblock images. Experiment Designer allows the definition of a number
of iteration loops, and synchronous or asynchronous control of hardware actions during the ex-
periment.
Info
4 Each acquisition block can be seen as its own independent single experiment with its own
individual settings.
4 Each experiment block can have its own dimensions (e.g. channel settings like exposure
time, active camera, camera parameters; Z; T).
4 If an objective change is required, create a separate Execute block for this change.
4 Focus strategies are block specific as well.
4 You can change the order of experiment blocks via drag & drop in the experiment time-
line.
4 Special actions that influence the course of an experiment are performed with a special
block.
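The block structure described above can be pictured as an ordered list of typed blocks, each carrying its own settings. The following Python sketch uses an invented data model to mirror the description; the field names and block settings are not ZEN's internal format:

```python
# Illustrative data model for a multiblock experiment: an ordered list of
# blocks, each with its own type and settings. Types mirror the four block
# kinds described above; the fields are invented for illustration.

experiment = [
    {"type": "Acquisition", "settings": {"exposure_ms": 20}},
    {"type": "Delay", "seconds": 5},     # pauses, then continues automatically
    {"type": "Wait"},                    # idles until the user continues
    {"type": "Execute", "hardware_setting": "switch objective"},
]

def describe(experiment):
    # Each block gets a distinct number, shown above the block in the timeline.
    return [f"{i}: {b['type']}" for i, b in enumerate(experiment, start=1)]

print(describe(experiment))
# ['1: Acquisition', '2: Delay', '3: Wait', '4: Execute']
```

Reordering blocks via drag & drop then simply corresponds to reordering this list, with the block numbers reassigned by position.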
Parameter Description
Mode Only visible for Celldiscoverer 7.
– Standard Selects the standard mode for designing your experiments. The pa-
rameters are described below.
– Multi Carrier Selects the multi carrier mode which allows you to define experiments
for the individual carriers of a multi carrier insert. For the descriptions
of the parameters, see Experiment Designer (Multi Carrier Mode)
[} 1182].
Export Opens a dialog to select which experiment blocks you want to export
to the file system.
Create Creates a new block of the currently selected type. Clicking on the ar-
row opens a dropdown to select which type of experiment block you
want to add. Each created block is displayed in the Timeline of the
experiment below the buttons.
– Create Acqui- Adds a new, empty acquisition block to the experiment timeline.
sition Block
– Create Delay Adds a delay block to the experiment timeline which pauses the ex-
Block periment for a predefined period. After that period, the experiment
continues automatically.
– Create Wait Adds a wait block to the experiment timeline which holds the experi-
Block ment in idle status until you click Continue Experiment. This can be
used for adding a solution or changing the buffer of the specimen. A
message box is displayed when a wait block is reached.
– Create Exe- Adds an execute block to the experiment timeline to execute a se-
cute Block lected hardware setting.
Duplicate Duplicates the selected block and inserts the newly created block after
the last block.
Properties of Block Displays options for the currently selected experiment block.
Parameter Description
– Delay Only visible if a delay block is selected.
Sets the delay with the slider or input field. The delay is displayed in
the block.
Specifies which experiment blocks should be repeated during the experiment. You can define as
many repetitions as you like for each experiment. An experiment block can only be part of one of
the defined repetitions.
Info
If you define several repetitions, the following conditions must be met:
4 Repetitions must form a complete unit.
4 One repetition may not be placed within another.
ð If these conditions are not met, the repetition cannot be performed. In this case a yellow
warning symbol appears in the Active column.
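The two conditions above can be checked mechanically if each repetition is modeled as a contiguous range of experiment-block numbers. A small Python sketch of that validation (the tuple representation is invented for illustration):

```python
# Sketch of the two repetition rules: a repetition must form a complete
# (contiguous, well-ordered) unit, and one repetition may not be placed
# within another. Repetitions are modeled as (first_block, last_block)
# tuples of 1-based block numbers; this data model is illustrative only.

def repetitions_valid(repetitions):
    spans = sorted(repetitions)
    for first, last in spans:
        if first > last:
            return False              # not a complete unit
    for (a1, a2), (b1, b2) in zip(spans, spans[1:]):
        if b1 <= a2:
            return False              # overlapping or nested repetitions
    return True

print(repetitions_valid([(1, 2), (4, 5)]))   # True
print(repetitions_valid([(1, 4), (2, 3)]))   # False: one inside another
```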
Parameter Description
Loops Sets the number of loops that are performed.
Parameter Description
Time Stitch all Images of Each Looped Acquisition Block Activated: All images of a looped acquisition block are put into one image document. The images are automatically stitched together using the time (T) index.
Parameter Description
Edit Setting/Light Path Opens the light path dialog to change the relevant hardware setting.
Experiment Settings Pool Displays a list of existing hardware settings you can select.
From File... Opens a dialog window to select and import a hardware settings
file (*.czhws).
Parameter Description
Import from Selects the experiment you want to import experiment blocks from. The experiment must be saved on your computer.
Select Desired Blocks Selects the experiment blocks you want to import. The selected block is marked in blue. If you want to import all blocks of an experiment, do not select a block but continue by clicking on the Import button directly.
Experiment feedback (also known as conditional or adaptive experiments) allows you to define
specific rules and actions to be performed during the acquisition of an experiment. It is possible to
change the course of an experiment depending on the current system status or the nature of the
acquired data during acquisition. Moreover, it is possible to integrate certain tasks like data log-
ging or starting of an external application, directly into the ZEN experiment. A typical use case is
to connect the image acquisition with automated image analysis.
Feedback experiments can be set up and controlled with the Experiment Feedback Tool [} 294]
and the Script Editor [} 294]. For an example workflow for experiment feedback, see Workflow
Experiment Feedback [} 290].
Note that we do not describe experiment feedback in detail here, as you can find detailed
instructions on how to perform feedback experiments and a lot of tutorials with the latest ZEN
installation if you activate the entry OAD Samples in the ZEISS Microscopy Installer. The
tutorials and documentation are then placed in a folder on your PC like C:\Oad\Experiment
Feedback.
Key Features
§ Create smart experiments and modify the acquisition on-the-fly based on online image analy-
sis, hardware changes or external inputs (e.g. TTL signals).
§ The adaptive acquisition engine allows modifying running experiments according to the rules
defined inside the feedback script.
§ The feedback script uses ZEN commands in combination with the Python programming lan-
guage.
§ The feedback script gives access to the current system status and results from the online im-
age analysis during runtime of an experiment.
§ Data Logging or starting an external application (e.g. Python, Fiji, MATLAB, etc. ), directly
from within the imaging experiment is possible.
Step Description
Define Experiment Set up and configure the actual image acquisition experiment to
obtain the desired image data, e.g. time lapse, Z-stack, multi-
channel, tile acquisition, etc. Once the setup of the acquisition is
completed, you acquire sample data which will be used in the
following step to setup and test the image analysis setting.
Define Image Analysis Sets up an image analysis setting via the Image Analysis Wizard
for the use inside the feedback script if an analysis step is re-
quired. Only parameters specified in the image analysis setting
can later be accessed from within the experiment feedback
script. Test the image analysis setting to ensure the results of the
image analysis are meaningful.
For more information on the Image Analysis Wizard, see Creat-
ing a New Image Analysis Setting [} 403].
For advanced analysis requirements it is also possible to use an
OAD macro to create an image analysis setting (*.czias file).
Define Rules and Observables This step defines how the script actually works. Here you define the rules in the feedback script, e.g. which parameters are observed and how the experiment should react when a certain event occurs.
Start Experiment Start the Experiment Feedback experiment and watch the out-
put. The general concept behind this workflow can be described
as a loop, which is the actual acquisition. For every event, e.g.
when a new image has been acquired, the script will be exe-
cuted. The rules are checked and if required, certain tasks are
carried out. Additionally, it is possible to log data into a text file
and/or start an external application at any time point during the
experiment.
See also
2 Script Editor for Experiment Feedback Dialog [} 294]
To use the Experiment Feedback tool, you need to edit the feedback script.
The main loop script run is only triggered when an observable that is used in the main loop
script has changed. If the parameters within the loop script do not change, the script is not
executed.
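This trigger condition can be pictured as plain change detection on the observables the loop script reads. A pure-Python sketch (the observable names and bookkeeping are invented for illustration):

```python
# Change-detection sketch: the loop script runs only when one of the
# observables it uses differs from its last-seen value. The observable
# names ("frame_index", "cell_count") are invented for illustration.

last_seen = {}
runs = []

def maybe_run_loop_script(observables):
    if observables != last_seen:
        last_seen.clear()
        last_seen.update(observables)
        runs.append(dict(observables))   # stand-in for executing the script

maybe_run_loop_script({"frame_index": 0, "cell_count": 10})
maybe_run_loop_script({"frame_index": 0, "cell_count": 10})  # unchanged: skipped
maybe_run_loop_script({"frame_index": 1, "cell_count": 12})
print(len(runs))  # 2
```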
These changes can be:
1. Select Acquisition tab > Experiment Feedback tool > Edit Feedback Script.
à The Script Editor for Experiment Feedback dialog opens. For more information, see
Script Editor for Experiment Feedback Dialog [} 294].
2. Create and edit the feedback script in the following sections: the PreLoop Script, the
Loop Script and the PostLoop Script. To do so, use the commands on the Command
tab. Add observables and actions to your experiment feedback script either via double-click
or drag-and-drop. All observables and actions are also available via IntelliSense auto-com-
pletion starting with ZENService.
3. If an image analysis is part of your feedback experiment, you can select an existing image
analysis you previously created. Select Command tab > Available Observables section >
Analysis drop-down menu.
à A list of the features that were defined in this image analysis setting, e.g. number of cells
detected, is displayed. If you selected an image analysis setting and use one or more fea-
tures in the feedback script, this image analysis setting is executed for each acquired im-
age. Note that only features you have previously defined in the Image Analysis Wizard
are available from within the Feedback Experiment. If you make any changes within the
image analysis wizard to an existing *.czias file, you need to reload it in the feedback
script editor to activate the changes in the feedback experiment.
4. Click OK.
à The existing code is validated. You can only save and close the script editor if the code is
free from syntax mistakes.
The feedback script will be stored as part of the experiment file *.czexp.
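The three script sections (PreLoop, Loop, PostLoop) divide the work as follows: one-time setup, a reaction that fires on every observable change, and one-time teardown. The Python sketch below shows only that control flow. Everything in it is illustrative: the real observables and actions come from ZENService via the Command tab, while the cell_count observable, max_cells threshold and stop_experiment stand-in here are invented:

```python
# Illustrative stand-in for a feedback script. In ZEN, observables and
# actions come from ZENService; here they are replaced by plain Python
# so the pre/loop/post control flow can be demonstrated.

# --- PreLoop: runs once at experiment start ---
max_cells = 100          # invented threshold
log = []

# --- Loop Script: runs whenever a used observable changes ---
def loop_script(cell_count, stop_experiment):
    log.append(cell_count)
    if cell_count >= max_cells:
        stop_experiment()            # in ZEN: an Experiment Action

# --- PostLoop: runs once when the acquisition is finished ---
def post_loop():
    return f"{len(log)} loop runs, last count {log[-1]}"

# Simulated acquisition loop feeding the observable to the loop script:
stopped = []
for count in (10, 60, 120, 200):
    if stopped:
        break
    loop_script(count, lambda: stopped.append(True))
print(post_loop())   # "3 loop runs, last count 120"
```

The acquisition stops as soon as the rule fires (here at a count of 120), so the last scheduled frame is never processed.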
The script runtime conditions describe how the different steps of the experiment Feedback
Script are executed. You can choose between Free Run and Synchronized execution.
1. Select Acquisition tab > Experiment Feedback tool, and select Free Run.
à The execution of all steps of the Feedback experiment are independent and not executed
sequentially.
1. Select Acquisition tab > Experiment Feedback tool, and select Synchronized.
2. Click on the blue buttons to define the script slot and to change the execution order of the
script.
à The selected steps are executed sequentially.
§ The script run starts after the acquisition together with the online image analysis. Therefore,
the script execution, image analysis and writing of the image subblock to the hard drive are
not in sync.
§ The online image analysis starts after the acquisition of a frame is finished. Only when the im-
age analysis is finished, the next script run is triggered. This guarantees that all analysis results
exist and can be used in the feedback script. The writing to disk is not synchronized.
§ The online image analysis starts after the acquisition of a frame is finished. The image data
are written to disk, and only when all tasks are finished is the next script run triggered. This
guarantees that all analysis results exist and the image data is stored on disk before the next
script run is triggered. This option is relevant in case the script starts an external application to
analyze the data.
Parameter Description
Edit Feedback Script... Opens the Script Editor [} 294] dialog. There you can create scripts for an Experiment Feedback.
- Free Run Upon experiment start, the acquisition and the feedback script are
started but run in a completely unsynchronized manner from then on.
The online image analysis or the script run itself will not slow down
the actual image acquisition.
- Synchronized This mode leads to a strictly determined order of events depending
on the chosen level of synchronization. The online image analysis and/
or the feedback script are started after the current acquisition is finished.
In contrast to the Free Run mode, a synchronized run can slow the
whole acquisition down. The big advantage of this mode is that the
synchronized run ensures a predictable workflow.
Define Script Slot Here you define the experiment feedback sequence by arranging the
slots (represented by blue buttons). The blue slots run one after another;
the non-blue slots are run separately.
§ The Acquisition slot represents the actual image acquisition.
§ The Analysis slot represents the online image analysis.
§ The Script Run* slot represents the execution of the experiment
feedback loop script. Note that the loop script will be only exe-
cuted when triggered by a used observable inside the loop script.
§ The HD Writing slot represents the slot for writing the image data
to your hard drive.
Allow additional loop script runs Triggers the main loop if observables that are not part of the multidimensional experiment (e.g. frame index, time index, block index, etc.) change. Those observables could be time or temperature of the incubation. If the checkbox is not activated, only experiment observables trigger the main loop script. This only applies when the acquisition is idle.
See also
2 Editing the Feedback Script [} 291]
The three windows on the left allow you to input scripts based on the programming language
Python:
Window Description
Pre Loop - Single Execution on Experiment Start Import modules and define functions or variables. This part is executed only once at the beginning of the feedback experiment.
Loop Script - Repetitive Execution During Experiment Runtime Executed every time an observable used within the loop script changes. Allows you to modify the experiment on-the-fly and react to results from the online image analysis, e.g. stop the acquisition when a defined number of cells have been counted, or to take action upon external signals.
Post Loop Script - Single Execution on Experiment Stop Define actions that are executed only once when the acquisition is finished, e.g. write the data in a logfile or play a sound when the experiment is finished.
Accept Applies the script to the experiment without closing the window.
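The three windows above can be sketched as plain Python. This is only an illustrative skeleton: the names cell_target, counted, cells_in_frame and the helper functions are assumptions, not part of the ZEN API; inside ZEN, stop_experiment and write_log would be the documented ZenService.Actions.StopExperiment and ZenService.Xtra.System.AppendLogLineString commands.

```python
# --- Pre Loop: single execution on experiment start ---
cell_target = 100   # stop once this many cells were counted (assumption)
counted = 0

# --- Loop Script: repetitive execution during experiment runtime ---
def loop_step(cells_in_frame, stop_experiment):
    """Accumulate the online analysis result; stop when the target is hit.
    In ZEN, stop_experiment would be ZenService.Actions.StopExperiment."""
    global counted
    counted += cells_in_frame
    if counted >= cell_target:
        stop_experiment()
        return True
    return False

# --- Post Loop Script: single execution on experiment stop ---
def post_loop(write_log):
    """In ZEN, write_log would be ZenService.Xtra.System.AppendLogLineString."""
    write_log("Feedback experiment finished, %d cells counted" % counted)
```

In the actual script windows you would place only the bodies of these sections; the wrapper functions exist here so the logic can be read outside ZEN.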
The Commands tab on the right contains all commands for observables, actions and editor tools.
Parameter Description
Available Observables Observables are conditions or parameters that can be deter-
mined and observed during the course of the experiment. Select
observables, actions and tools by clicking on the black triangle at
the right-hand edge and dragging the desired action from the list
to the desired input area.
The following observables are available:
Parameter Description
Available Actions Actions are possible actions and reactions that can be performed
during the experiment. These can vary greatly and include, for
example, changing microscope hardware, changing camera pa-
rameters, generating notifications or audible alerts, calling up
other programs or canceling the experiment.
- Experiment Ac- Commands that can be used to modify a running experiment on-
tions the-fly. It also includes modifying hardware parameters which
are typically part of an acquisition experiment, like exposure
times or light source intensities.
- Extra Actions Commands that can be used to log any kind of data into a text
file or to start an external application outside ZEN at any time
during a running experiment.
Editor Tools The Editor tools include sample scripts and allow you to play
sounds or write debugging outputs.
Validate Script Checks your script for errors. A message indicates whether your
script is valid or not.
Information
- IO Card Port Labels Contains the exact naming of the available IO ports for the
current system.
Info
It is crucial to understand that observables are more than just values "that can be observed".
They are required to trigger a run of the Loop Script.
If no observables are used, or if they remain unchanged, the script will not be executed.
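This trigger rule can be simulated outside ZEN. The snippet below is a plain Python illustration, not ZEN API: it mimics the documented behavior that the Loop Script is invoked only when an observable it uses changes value.

```python
def make_trigger(loop_script):
    """Return a callback that mimics ZEN's trigger rule: it invokes
    loop_script only when the reported observable value differs from
    the previously seen value for that observable."""
    last_values = {}
    def on_observable_update(name, value):
        if last_values.get(name) == value:
            return False          # unchanged observable: script not executed
        last_values[name] = value
        loop_script(name, value)
        return True
    return on_observable_update
```

This also makes the Info box concrete: a loop script that references no observable at all would never be called by such a dispatcher.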
The available commands inside the analysis sections depend on the definition of the image
analysis pipeline and the ROIs defined in the Physiology module. This list is therefore created
dynamically and will always look different. It only contains those parameters which are defined
inside the CZIAS measurement file. Such a file can be created using the image analysis wizard or
an OAD macro.
CurrentBlockIndex
Complete Command ZenService.Experiment.CurrentBlockIndex
Input
Output integer
CurrentSceneIndex
Complete Command ZenService.Experiment.CurrentSceneIndex
Input
Output integer
Description Returns the index of the current position for multi-position experi-
ments.
CurrentTileIndex
Complete Command ZenService.Experiment.CurrentTileIndex
Input
Output integer
CurrentTimePointIndex
Complete Command ZenService.Experiment.CurrentTimePointIndex
Input
Output integer
Description Returns the current time point index, e.g. the frame number from a
time-lapse experiment.
CurrentTrackIndex
Complete Command ZenService.Environment.CurrentTrackIndex
Input
Output integer
CurrentZSliceIndex
Complete Command ZenService.Experiment.CurrentZSliceIndex
Input
Output integer
ElapsedTimeInMinutes
Complete Command ZenService.Experiment.ElapsedTimeInMinutes
Input
Output double
Description Returns the elapsed time in minutes for the current experiment.
ElapsedTimeInSeconds
Complete Command ZenService.Experiment.ElapsedTimeInSeconds
Input
Output double
Description Returns the elapsed time in seconds for the current experiment.
ImageFileName
Complete Command ZenService.Experiment.ImageFileName
Input
Output string
Description Returns the name of the current experiment as a string. The file
extension is *.czi, or *.czmbi for multi-block experiments.
IsExperimentPaused
Complete Command ZenService.Experiment.IsExperimentPaused
Input
Output boolean
Description Returns the paused state of the current experiment, i.e. it will be True
if the experiment is paused.
IsExperimentRunning
Complete Command ZenService.Experiment.IsExperimentRunning
Input
Output boolean
Description Returns the running state of the current experiment, i.e. it will be True
if the experiment is running.
HasChanged
Complete Command ZenService.Environment.HasChanged(String observableId)
Input
Output boolean
CurrentDateDay
Complete Command ZenService.Environment.CurrentDateDay
Input
Output integer
CurrentDateMonth
Complete Command ZenService.Environment.CurrentDateMonth
Input
Output integer
CurrentDateYear
Complete Command ZenService.Environment.CurrentDateYear
Input
Output integer
CurrentTimeHour
Complete Command ZenService.Environment.CurrentTimeHour
Input
Output integer
CurrentTimeMinute
Complete Command ZenService.Environment.CurrentTimeMinute
Input
Output integer
CurrentTimeSeconds
Complete Command ZenService.Environment.CurrentTimeSeconds
Input
Output integer
CurrentTimeMilliseconds
Complete Command ZenService.Environment.CurrentTimeMilliseconds
Input
Output integer
FreeDiskSpaceInMBytes
Complete Command ZenService.Environment.FreeDiskSpaceInMBytes
Input
Output double
Description Returns the free disk space on the hard disk where the experiment
data will be saved.
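A typical use of this observable is a disk-space guard in the Loop Script. The sketch below is an assumption-laden example, not a ZEN-provided function: the 2048 MB threshold is arbitrary, and the callables stand in for the documented ZenService.Actions.StopExperiment and ZenService.Xtra.System.AppendLogLineString commands.

```python
LOW_DISK_MB = 2048.0   # minimum free space before stopping (assumption)

def guard_disk_space(free_mb, stop_experiment, write_log):
    """Stop the running experiment if free disk space drops too low."""
    if free_mb < LOW_DISK_MB:
        write_log("Stopping experiment: only %.0f MB free" % free_mb)
        stop_experiment()
        return True
    return False

# In the Loop Script window this could be called as (per the reference):
#   guard_disk_space(ZenService.Environment.FreeDiskSpaceInMBytes,
#                    ZenService.Actions.StopExperiment,
#                    ZenService.Xtra.System.AppendLogLineString)
```

Because FreeDiskSpaceInMBytes is an observable, referencing it in the loop script also makes the loop run whenever the value changes.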
IncubationAirHeaterIsEnabled
Complete Command ZenService.Hardware.IncubationAirHeaterIsEnabled
Input
Output boolean
IncubationAirHeaterTemperature
Complete Command ZenService.Hardware.IncubationAirHeaterTemperature
Input
Output double
IncubationChannelXIsEnabled (X=1-4)
Complete Command ZenService.Hardware.IncubationChannelXIsEnabled
Input
Output boolean
IncubationChannelXTemperature (X=1-4)
Complete Command ZenService.Hardware.IncubationChannelXTemperature
Input
Output double
IncubationCO2Concentration
Complete Command ZenService.Hardware.IncubationCO2Concentration
Input
Output Double
IncubationCO2IsEnabled
Complete Command ZenService.Hardware.IncubationCO2IsEnabled
Input
Output Boolean
IncubationO2Concentration
Complete Command ZenService.Hardware.IncubationO2Concentration
Input
Output Double
IncubationO2IsEnabled
Complete Command ZenService.Hardware.IncubationO2IsEnabled
Input
Output Boolean
IncubationCoolingChannelIsEnabled
Complete Command ZenService.Hardware.IncubationCoolingChannelIsEnabled
Input
Output Boolean
IncubationCoolingChannelTemperature
Complete Command ZenService.Hardware.IncubationCoolingChannelTemperature
Input
Output Double
IncubationHumidityIsEnabled
Complete Command ZenService.Hardware.IncubationHumidityIsEnabled
Input
Output Boolean
IncubationHumidityValue
Complete Command ZenService.Hardware.IncubationHumidityValue
Input
Output Double
TriggerDigitalInX (X = ...)
The range of values for X depends on the IO card configuration.
Input
Output Boolean
TriggerDigitalOutX (X = ...)
The range of values for X depends on the IO card configuration.
Input
Output Boolean
TriggerDigitalOutRLShutter
Complete Command ZenService.Hardware.TriggerDigitalOutRLShutter
Input
Output Boolean
Description Returns the state of the reflected light shutter, where True = Open
and False = Closed.
TriggerDigitalOutTLShutter
Complete Command ZenService.Hardware.TriggerDigitalOutTLShutter
Input
Output Boolean
Description Returns the state of the transmitted light shutter, where True = Open
and False = Closed.
Depending on the hardware present, the shown options might vary; e.g. it is only possible to set
the laser intensity if a laser engine is configured.
ContinueExperiment
Complete Command ZenService.Actions.ContinueExperiment()
Input
Output
Description This will continue the current experiment in case it was paused.
JumpToBlock
Complete Command ZenService.Actions.JumpToBlock(int newBlockIndex)
Output
Description This will jump to the specified experiment block inside a heteroge-
neous experiment.
JumpToContainer
Complete Command ZenService.Actions.JumpToContainer(string containerId)
Output
Description This will jump to the specified container of the sample carrier.
JumpToNextBlock
Complete Command ZenService.Actions.JumpToNextBlock()
Input
Output
Description This will jump to the next acquisition block defined inside the
Experiment Designer tool.
JumpToNextContainer
Complete Command ZenService.Actions.JumpToNextContainer()
Input
Output
Description This will jump to the next container inside the currently used sample
carrier.
JumpToNextRegion
Complete Command ZenService.Actions.JumpToNextRegion()
Input
Output
JumpToPreviousBlock
Complete Command ZenService.Actions.JumpToPreviousBlock()
Input
Output
Description This will jump to the previous acquisition block inside the experiment
designer tool.
MoveTileRegion (1)
Complete Command ZenService.Actions.MoveTileRegion(int regionIndex, double x,
double y)
Output
Description Updates the specified tile region inside the tile region list with the
given X and Y coordinates.
MoveTileRegion (2)
Complete Command ZenService.Actions.MoveTileRegion(int regionIndex, double x,
double y, double z)
Output
Description Updates the specified tile region inside the tile region list with the
given X, Y and Z stage coordinates.
MoveTileRegionByOffset
Complete Command ZenService.Actions.MoveTileRegionByOffset(int regionIndex,
double offsetX, double offsetY, double offsetZ)
Input integer regionIndex, double offsetX [nm], double offsetY [nm], double
offsetZ [nm]
Output
Description Updates the specified tile region inside the tile region list with the
given offset in X, Y and Z.
PauseExperiment
Complete Command ZenService.Actions.PauseExperiment()
Input
Output
ReadLEDIntensity
Complete Command ZenService.Actions.ReadLEDIntensity(int trackindex, double
wavelength)
Input integer trackindex, double wavelength [nm]
Description This will read the intensity for the specified track and LED.
ReadTLHalogenLampIntensity
Complete Command ZenService.Actions.ReadTLHalogenLampIntensity(int
trackindex)
Description This will read the intensity of the TL halogen lamp for the specified
track.
SetExposureTime (1)
Complete Command ZenService.Actions.SetExposureTime(int channelindex, double
exposure)
Output
Description This will set the exposure time of the camera for the specified chan-
nel.
SetExposureTime (2)
Complete Command ZenService.Actions.SetExposureTime(int trackindex, int chan-
nelindex, double exposure)
Output
Description This will set the exposure time of the camera for the specified track
and channel.
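SetExposureTime can be combined with an online analysis result to adapt the exposure on-the-fly. The sketch below is illustrative: TARGET_MEAN, the proportional scaling rule, and the way the mean intensity is obtained are assumptions; only SetExposureTime itself is from the command reference above.

```python
TARGET_MEAN = 2000.0   # desired mean image intensity (assumption)

def adjust_exposure(mean_intensity, exposure_ms, set_exposure, channel_index=0):
    """Return the new exposure time after rescaling toward TARGET_MEAN.
    In ZEN, set_exposure would be ZenService.Actions.SetExposureTime."""
    if mean_intensity <= 0:
        return exposure_ms           # nothing measured, keep current exposure
    new_exposure = exposure_ms * TARGET_MEAN / mean_intensity
    set_exposure(channel_index, new_exposure)
    return new_exposure
```

In a synchronized run this guarantees the new exposure takes effect before the next acquisition; in Free Run mode the change may land mid-sequence.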
SetLEDIntensity
Complete Command ZenService.Actions.SetLEDIntensity(int trackindex, double
wavelength, double intensity)
Output
Description This will set the intensity for the specified track and LED.
SetLEDIsEnabled
Complete Command ZenService.Actions.SetLEDIsEnabled(int trackindex, double
wavelength, bool isEnabled)
Output
Description This will enable the specified LED for the specified track. The prede-
fined intensity value will be used.
SetMarkerString
Complete Command ZenService.Actions.SetMarkerString(string markerText)
Output
SetTimeSeriesInterval (1)
Complete Command ZenService.Actions.SetTimeSeriesInterval(double interval, Time-
Unit unit)
Output
Description This will set the interval of a time series to the specified value.
SetTimeSeriesInterval (2)
Complete Command ZenService.Actions.SetTimeSeriesInterval(double interval)
Output
Description This will set the interval of a time series to the specified value in [ms].
SetTLHalogenLampIntensity
Complete Command ZenService.Actions.SetTLHalogenLampIntensity(int trackindex,
double intensity)
Output
Description This will set the intensity of the TL Halogen lamp for the specified
track.
StopExperiment
Complete Command ZenService.Actions.StopExperiment()
Input
Output
Description This will stop the current running experiment (similar to pressing the
Stop button).
UpdateZStackCenterPosition
Complete Command ZenService.Actions.UpdateZStackCenterPosition(double center-
Position)
Output
Description Updates the center position of the defined Z-stack with the specified
center position.
UpdateZStackCenterPositionByOffset
Complete Command ZenService.Actions.UpdateZStackCenterPositionByOffset(dou-
ble centerPositionOffset)
Output
Description Moves the center position of the defined Z-stack by the specified off-
set.
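UpdateZStackCenterPositionByOffset is well suited for focus drift compensation in the Loop Script. The sketch below is an assumption: how the drift offset is measured (e.g. from the online image analysis) is not specified here, and the per-run clamp MAX_STEP_UM is arbitrary; only the update command itself is from the reference above.

```python
MAX_STEP_UM = 2.0   # largest correction applied in one loop run (assumption)

def correct_z_drift(measured_offset_um, update_center_by_offset):
    """Clamp the measured offset and apply it to the Z-stack center;
    returns the step actually applied. In ZEN, update_center_by_offset
    would be ZenService.Actions.UpdateZStackCenterPositionByOffset."""
    step = max(-MAX_STEP_UM, min(MAX_STEP_UM, measured_offset_um))
    if step != 0.0:
        update_center_by_offset(step)
    return step
```

Clamping the step keeps a single noisy measurement from throwing the stack far off focus; repeated loop runs still converge on a real drift.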
LSM - ReadAnalogInTriggerState
Complete Command ZenServiceLSM.Actions.ReadAnalogInTriggerState(int port)
Output
LSM - ReadAnalogOutTriggerState
Complete Command ZenServiceLSM.Actions.ReadAnalogOutTriggerState(int port)
Output
LSM - ReadDigitalGain
Complete Command ZenServiceLSM.Actions.ReadDigitalGain(int trackIndex, int
channelIndex)
Description Returns the value for the DigitalGain for the specified track/channel.
LSM - ReadLaserIntensity
Complete Command ZenServiceLSM.Actions.ReadLaserIntensity(int trackIndex, int
lineWavelength)
Description Returns the laser intensity [%] for the specified laser wavelength.
LSM - ReadMasterGain
Complete Command ZenServiceLSM.Actions.ReadMasterGain(int trackIndex, int de-
tectorIndex)
Description Returns the value for the MasterGain for the specified track/detector
combination.
LSM - ReadPinholeDiameter
Complete Command ZenServiceLSM.Actions.ReadPinholeDiameter(int trackIndex)
Description Returns the value for the pinhole in [micron] for the specified track.
LSM - ReadScanDirection
Complete Command ZenServiceLSM.Actions.ReadScanDirection()
Input
LSM - ReadScanSpeed
Complete Command ZenServiceLSM.Actions.ReadScanSpeed()
Input
LSM - ReadTLLEDIntensity
Complete Command ZenServiceLSM.Actions.ReadTLLEDIntensity(int trackIndex)
Description Returns the intensity value [%] for the TL-LED.
LSM - SetAnalogOutTriggerState
Complete Command ZenServiceLSM.Actions.SetAnalogOutTriggerState(int port,
double state)
Output
LSM - SetDigitalGain
Complete Command ZenServiceLSM.Actions.SetDigitalGain(int trackIndex, int chan-
nelIndex, double digitalGain)
Output
Description Sets the value for the DigitalGain for the specified track/channel.
LSM - SetLaserEnabled
Complete Command ZenServiceLSM.Actions.SetLaserEnabled(int trackIndex, int
lineWavelength, bool isEnabled)
Output
Description Sets the state for the specified laser inside the specified track.
LSM - SetLaserIntensity
Complete Command ZenServiceLSM.Actions.SetLaserIntensity(int trackIndex, int
lineWavelength, double intensity)
Output
Description Sets the laser intensity for the specified laser inside the specified track.
LSM - SetMasterGain
Complete Command ZenServiceLSM.Actions.SetMasterGain(int trackIndex, int de-
tectorIndex, double masterGain)
Output
Description Sets the value for the MasterGain for the specified track/detector.
LSM - SetPinholeDiameter
Complete Command ZenServiceLSM.Actions.SetPinholeDiameter(int trackIndex,
double pinholeDiameter)
Output
LSM - SetScanDirection
Complete Command ZenServiceLSM.Actions.SetScanDirection(ScanDirections direc-
tion)
Output
LSM - SetScanSpeed
Complete Command ZenServiceLSM.Actions.SetScanSpeed(int speed)
Output
LSM - SetTLLEDIntensity
Complete Command ZenServiceLSM.Actions.SetTLLEDIntensity(int trackIndex, dou-
ble intensity)
Output
ExecuteHardwareSetting
Complete Command ZenService.HardwareActions.ExecuteHardwareSetting(string
settingNameInExperimentSettingsPool)
Output
Description Loads a specific experiment setting from the list of experiment set-
tings. The settings will be overwritten within the next experi-
ment loop, if the settings are included in the regular experi-
ment settings.
ExecuteHardwareSettingFromFile
Complete Command ZenService.HardwareActions.ExecuteHardwareSettingFrom-
File(string hardwareSettingFilePath)
Output
Description Loads a specific experiment setting from a file directly. The settings
will be overwritten within the next experiment loop, if the set-
tings are included in the regular experiment settings.
PulseTriggerDigitalOut
Complete Command ZenService.HardwareActions.PulseTriggerDigitalOut(string
portLabel, double duration)
Output
Description Produces a TTL pulse with a specified duration at the selected trigger
port.
PulseTriggerDigitalOutX (X=7-8)
Complete Command ZenService.HardwareActions.PulseTriggerDigitalOutX(double
duration)
Output
ReadFocusPosition
Complete Command ZenService.HardwareActions.ReadFocusPosition()
Output
Description Gets the current value of the Z-drive. This function is placed under
the category HardwareActions and not under Observables, since
observables automatically trigger the execution of the main loop of
the script when they change.
ReadStagePositionX
Complete Command ZenService.HardwareActions.ReadStagePositionX()
Output
Description Gets the current value of the stage X-axis. This function is placed
under the category HardwareActions and not under Observables, since
observables automatically trigger the execution of the main loop of
the script when they change.
ReadStagePositionY
Complete Command ZenService.HardwareActions.ReadStagePositionY()
Output
Description Gets the current value of the stage Y-axis. This function is placed
under the category HardwareActions and not under Observables, since
observables automatically trigger the execution of the main loop of
the script when they change.
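The position reads pair naturally with AppendLogLineString to record a position trace during an experiment. Below, only the log-line formatter is real code; the semicolon-separated layout is an assumption, and the commented calls merely repeat commands from the reference above.

```python
def format_position_line(time_index, x_um, y_um, z_um):
    """Format one log entry: time point index and X/Y/Z in micrometers."""
    return "%d;%.2f;%.2f;%.2f" % (time_index, x_um, y_um, z_um)

# In the Loop Script window (per the reference above):
#   line = format_position_line(
#       ZenService.Experiment.CurrentTimePointIndex,
#       ZenService.HardwareActions.ReadStagePositionX(),
#       ZenService.HardwareActions.ReadStagePositionY(),
#       ZenService.HardwareActions.ReadFocusPosition())
#   ZenService.Xtra.System.AppendLogLineString(line)
```

Referencing CurrentTimePointIndex also makes the loop script run once per frame, so one line is logged per time point.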
SetFocusPosition
Complete Command ZenService.HardwareActions.SetFocusPosition(double Position-
InMicrometer)
Output
Description Sets the focus drive to the specified position. Currently the piezo drive
is not accessible from the Experiment Feedback script.
SetIncubationAirHeaterIsEnabled
Complete Command ZenService.HardwareActions.SetIncubationAirHeaterIsEn-
abled(bool IsEnabled)
Output
SetIncubationAirHeaterTemperature
Complete Command ZenService.HardwareActions.SetIncubationAirHeaterTempera-
ture(double temperature)
Output
Description This will set the temperature of the AirHeater to the specified temper-
ature.
SetIncubationChannelIsEnabled
Complete Command ZenService.HardwareActions.SetIncubationChannelIsEn-
abled(int channel, bool isEnabled)
Input integer channel, boolean isEnabled
SetIncubationChannelTemperature
Complete Command ZenService.HardwareActions.SetIncubationChannelTempera-
ture(int channel, double temperature)
Output
Description Sets the temperature of the selected channel to the specified value.
SetIncubationCO2Concentration
Complete Command ZenService.HardwareActions.SetIncubationCO2Concentra-
tion(double concentration)
Output
SetIncubationCO2IsEnabled
Complete Command ZenService.HardwareActions.SetIncubationCO2IsEnabled(bool
isEnabled)
Output
SetIncubationO2Concentration
Complete Command ZenService.HardwareActions.SetIncubationO2Concentra-
tion(double concentration)
Output
SetIncubationO2IsEnabled
Complete Command ZenService.HardwareActions.SetIncubationO2IsEnabled(bool
isEnabled)
Output
Description Enables the O2 incubation.
SetStagePosition
Complete Command ZenService.HardwareActions.SetStagePosition(double posi-
tionXInMicrometer, double positionYInMicrometer)
Output
SetStagePositionX
Complete Command ZenService.HardwareActions.SetStagePosition(double posi-
tionXInMicrometer)
Output
SetStagePositionY
Complete Command ZenService.HardwareActions.SetStagePosition(double posi-
tionYInMicrometer)
Output
SetTriggerDigitalOut
Complete Command ZenService.HardwareActions.SetTriggerDigitalOut(string port-
Label, bool isSet)
Output
SetTriggerDigitalOut7
Complete Command ZenService.HardwareActions.SetTriggerDigitalOut7(bool isSet)
Output
SetTriggerDigitalOut8
Complete Command ZenService.HardwareActions.SetTriggerDigitalOut8(bool isSet)
Output
SetTriggerDigitalOutRLShutter
Complete Command ZenService.HardwareActions.SetTriggerDigitalOutRLShut-
ter(bool isSet)
Output
SetTriggerDigitalOutTLShutter
Complete Command ZenService.HardwareActions.SetTriggerDigitalOutTLShutter(bool
isSet)
Output
AppendLogLineString (1)
Complete Command ZenService.Xtra.System.AppendLogLineString(string logMes-
sage)
Output string (contains the file path of the data log file)
AppendLogLineString (2)
Complete Command ZenService.Xtra.System.AppendLogLineString(string logMes-
sage, string logFileName)
Output string (contains the file path of the data log file)
ExecuteExternalProgram (1)
Complete Command ZenService.Xtra.System.ExecuteExternalProgram(string exe-
FilePath)
Output
Description Starts an external application (*.exe, *.py, *.mp3, ...). It may be
required to add the location of the application to the environment
variables, usually PATH, in order for Windows to find the application.
It is also possible to use the absolute file path.
ExecuteExternalProgram (2)
Complete Command ZenService.Xtra.System.ExecuteExternalProgram(string exe-
FilePath, string arguments)
Output
Description This is similar to the function above, but it allows you to specify
arguments which are passed to the application. An example could look like this:
ZenService.Xtra.System.ExecuteExternalProgram("fiji-win64.exe",
"-macro Open_CZI_OME_complete.ijm Experiment-123.czi")
ExecuteExternalProgramBlocking (1)
Complete Command ZenService.Xtra.System.ExecuteExternalProgram(string exe-
FilePath, string arguments, int timeoutInMs)
Description This advanced method runs an external program and blocks subsequent
script execution. When Experiment Feedback runs in synchronous script
execution it also blocks the experiment until the external application
is closed. It requires a parameter called timeoutInMs, which specifies
the time period after which the script continues independently of the
external application. As an output it returns a so-called ExitCode
(integer), which is produced by the external application when exiting.
This ExitCode has to be supported by the external application.
Standard Windows programs (like Notepad, etc.) just return '0'. When
using a special or self-written application it is possible to define
your own ExitCode.
ExecuteExternalProgramBlocking (2)
Complete Command ZenService.Xtra.System.ExecuteExternalProgram(string exe-
FilePath, int timeoutInMs)
Description This is the same method as above but without additional arguments for
the external application. The timeout parameter is still required.
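The returned ExitCode can drive a feedback decision. In the sketch below, the convention that a nonzero code means "event detected" is an assumption you would implement in your own external tool, and the tool name and arguments in the comment are placeholders; only the blocking call with a timeout is documented above.

```python
def handle_exit_code(exit_code, pause_experiment, write_log):
    """Pause the experiment when the external tool signals an event.
    In ZEN, pause_experiment would be ZenService.Actions.PauseExperiment
    and write_log would be ZenService.Xtra.System.AppendLogLineString."""
    if exit_code != 0:
        write_log("External tool returned ExitCode %d, pausing" % exit_code)
        pause_experiment()
        return True
    return False

# In the Loop Script (paths and arguments are placeholders):
#   code = ZenService.Xtra.System.ExecuteExternalProgram(
#       "mytool.exe", "Experiment-123.czi", 60000)   # blocking overload
#   handle_exit_code(code, ZenService.Actions.PauseExperiment,
#                    ZenService.Xtra.System.AppendLogLineString)
```

A paused experiment can later be resumed from the script with ContinueExperiment.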
PlaySound (1)
Complete Command ZenService.Xtra.System.PlaySound()
Input
Output
PlaySound (2)
Complete Command ZenService.Xtra.System.PlaySound(string waveFilePath)
Output
RunLoopScript
Complete Command ZenService.Xtra.System.RunLoopScript()
Input
Output
Description This command will trigger a run of the LoopScript. It is only meant to
be used when running the Experiment Feedback script in synchronous
script execution.
WriteDebugOutput
Complete Command ZenService.Xtra.System.WriteDebugOutput(string message)
Input string
Output
Parameter Description
General
- Show Output Messages Opens the output messages field. Here the messages defined
below are displayed during the script run for debugging purposes.
Output Messages
Parameter Description
User Defined Messages
Auto Messages
- Script Run Writes which part of the script (pre loop, loop script, post loop
script) is run.
Additional Information
- Show Time Stamp Allows you to additionally display the time stamp for each
message.
With Guided Acquisition for ZEN you can create an automated workflow to acquire images
(overview), detect relevant objects (image analysis) and re-image these positions, using another
experiment, e.g. with higher magnification, z-stack etc.
Guided Acquisition Workflow:
1. Scan or inspect a large area (or over a long period of time).
2. Perform an analysis to detect interesting objects.
3. Acquire detailed images for every detected object.
A possible application is to detect rare events, e.g. to find transfected cells. For example, the sam-
ple contains many cells that are stained with a blue dye, but only a few are additionally expressing
GFP. Guided Acquisition allows you to find these cells and run another, e.g. high-magnification,
experiment on these positions.
After performing a (low-magnification) overview scan, the image analysis detects all cells and de-
termines which of them are expressing GFP, i.e. show a certain intensity in the GFP channel. Then
the microscope revisits all GFP-expressing cells and performs a second acquisition there, e.g. with
higher magnification, a z-stack, etc. The analysis results are automatically saved to a folder.
For a successful Guided Acquisition experiment, you need to prepare an overview and a detailed
experiment as well as an image analysis setting. If you want to process your overview image be-
fore it is analyzed, you also need a suitable setting for each processing step or function you want
to execute.
Overview scan
You have defined the experiment for the overview scan. Typically, the overview scan is using a
lower magnification in combination with a tile experiment.
Processing setting(s)
You have defined a suitable setting for each processing step or function you want to execute. For
more information, see General Settings [} 83].
Note that if you want to use Shading Correction with a reference image, you have to define your
setting in Batch mode!
Detailed scan
You can perform for example the following experiments:
Parcentricity
If you use different detectors for the overview and the detailed scan, it might be necessary to
correct for the shift between both detectors to ensure parcentricity. For this you have to take an
image at the exact same position with both cameras and then determine the offset between the
two. The reference for calculating this offset is the image taken with the camera for the overview
experiment. You can then enter the values for the shift in X and Y in the Guided Acquisition
setup.
Focus strategies
You have several options to perform a focus strategy.
Detailed scan and overview scan can each be defined with their own focus strategy using Focus
Surface and/or Software Autofocus.
During the Guided Acquisition experiment, you can define additional focusing steps. They are
independent of the focus strategy defined on the Acquisition tab, see Focus Strategy Tool
[} 928].
For more information on Guided Acquisition, see Performing a Guided Acquisition [} 321].
Info
Auto Immersion
If you set up your Guided Acquisition experiment using an immersion objective and user input
would be necessary between overview and detailed experiments, a dialog is displayed directly
after clicking Start. With this dialog you can control whether the immersion is applied when
changing the objective for the scan. The experiment runs uninterrupted afterwards.
Prerequisite ü Guided Acquisition is activated under Tools > Modules Manager > Guided Acquisition.
ü You have calibrated the XY stage.
ü You have defined a suitable experiment for overview scans and for detailed scans.
ü You have defined a suitable setting for each processing step or function you want to execute.
ü You have defined a suitable image analysis setting using the Image Analysis Wizard.
ü For more information, see Preliminary Work to Guided Acquisition [} 320].
1. On the Applications tab, open the Guided Acquisition tool.
2. Create a setting to save your experiment setup, see Using Guided Acquisition settings
[} 323].
3. In the Overview Scan section, select the experiment you want to use for creating the
overview image, as well as the objective and (if available) the after-magnification lens to be
used. Note that, by default, the objective and after-magnification lens are not stored as
part of an experiment.
4. If your system is equipped with Definite Focus, the Find Surface and SW Autofocus
checkboxes are visible. Activate a checkbox to perform an additional focusing step before
the overview experiment.
5. If you want to process your overview image before it is analyzed, click Add processing
method.
6. Select a Method for processing and a corresponding Setting.
7. In the Image Analysis section, select a suitable setting to analyze the overview scan and
detect the objects of interest.
à If your setting contains more than one class for analysis, the Class dropdown is dis-
played.
8. Select the Class (from an image analysis setting with multiple classes) whose segmented
regions should be imaged (e.g. with higher resolution) in the detailed experiment.
9. If you want the detailed scans to stop after a particular number of objects, activate Add
Stop Criterion for Detailed Scans.
à Additional options to define an automatic stop criterion for detailed scans are displayed.
10. Select the measurement feature that should be used to sort the objects detected by the
image analysis and set the order for sorting the results. The drop-down list displays all
suitable measurement features as defined in the selected image analysis setting.
Additionally, enter the maximum number of objects that should be scanned by the detailed
experiment and select whether it is the overall number of objects or the number of objects
per well.
à You have set up a stop criterion for the detailed experiment.
11. In the Detailed Experiment section, select the experiment for the detailed scan, the
objective and - if available - the after-magnification lens to be used. Note that, by default,
the objective and after-magnification lens are not stored as part of an experiment.
12. If you use different detectors for the detailed and overview experiments, enter an X offset
and Y offset to correct the parcentricity.
13. If your system is equipped with Definite Focus, the Find Surface checkbox is available.
Activate this checkbox to use Definite Focus to find the z-position of the glass surface once
before the detailed scans. Activate SW Autofocus to perform a software autofocus once
before the detailed scans. Optionally, activate Recall Focus to store the difference between
the glass surface and the sample and use this value for each of the detailed experiments.
14. Define the folder where you want to store the experiment data.
15. Click Start.
An overview scan is performed and a .czi image is acquired and saved to your folder.
In the example above, the image in the top left shows the overview scan and three identified ob-
jects marked with an orange box. For these positions, a detailed scan is performed with a higher
magnification.
One Overview Scan Regions.csv and one OverviewScan Region.csv are displayed. The
OverviewScan Region.csv table shows the found objects with ID, Bound Center X Stage [µm]
and Bound Center Y Stage [µm], Bound Width [µm] and Bound Height [µm], as well as
Image Scene Container Name and Image Index Scene.
For each detected object a detailed scan is performed. For each object, a *.czi image is acquired
and stored in your folder.
See also
2 Guided Acquisition Tool [} 324]
2 Setting Up a New Experiment [} 51]
Guided Acquisition offers you the possibility to save your whole experiment setup in a settings
file. This file is saved in the folder for your Guided Acquisition experiment together with all the
other settings (e.g. the overview and detailed experiment, the image analysis setting, and the im-
age processing settings, if a processing step was selected).
See also
2 Performing a Guided Acquisition [} 321]
2 Guided Acquisition Tool [} 324]
Parameter Description
Options
- New Creates a new Guided Acquisition setting. Enter a name for the set-
ting.
- Save Saves a modified setting under the current name. An asterisk indicates
the modified state.
- Save As Saves the current setting under a new name. Enter a name for the
setting.
Parameter Description
- Delete Deletes the current setting.
Overview Scan
- Objective Selects the appropriate objective for the overview scan (typically with
a low magnification). If the objective has already been defined in the
experiment, it is read only (greyed out) here.
- SW Autofocus Performs a SW Autofocus before the overview scan and sets the
found z-position as z-position for the overview scan. This step is inde-
pendent from and additional to any focusing strategy defined in the
overview experiment.
Processing Section for processing steps to process the overview image before it is
analyzed.
- Setting Selects the setting for the processing method from the drop-down
list.
Note: If you want to use Shading Correction with a reference im-
age, you have to define your setting in Batch mode!
Image Analysis
- Setting Selects the image analysis setting used to analyze the overview scan.
- Class Only visible if you have selected an analysis setting where more than
one class is defined.
Selects the class of the image analysis whose regions are used to per-
form the detailed scans.
- Add Stop Criterion for Detailed Scans Activated: Displays options to define a stop criterion for the detailed experiment.
- Select Feature Only visible if Add Stop Criterion for Detailed Scans is activated.
Selects the measurement feature that is used to sort the objects de-
tected by the image analysis from the list of suitable measurement
features defined in the image analysis.
- Select Order Only visible if Add Stop Criterion for Detailed Scans is activated.
Selects the order for sorting the results.
- Max. Objects Only visible if Add Stop Criterion for Detailed Scans is activated.
Sets the maximum number of objects that get scanned by the detailed
experiment. You can either define the number of objects for the
whole overview scan (Overall), or the number of objects Per Well.
Detailed Experiment
- Experiment Selects the experiment setup to acquire a detailed scan at the position
of each object detected by the image analysis.
- Objective Selects the appropriate objective for the detailed scan (typically with
high magnification). If the objective has already been defined in the
experiment, it is read only/greyed out here.
- Detector Parcentricity Correction Only visible if the detectors used in the overview and detailed experiments are different.
Sets a correction for parcentricity for the two detectors. In the two text fields you can enter the X offset and Y offset of the two different detectors in µm.
- SW Autofocus Performs a SW Autofocus before the first detailed scan and sets the
found z-position as z-position for the detailed scan. This step is inde-
pendent from and additional to any focusing strategy defined in the
detailed experiment.
If the detailed experiment is not a tiles experiment the focus strategy
is automatically set to the “standard” behavior, i.e. the selected auto-
focus strategy will be performed on each tile of each detailed experi-
ment or each timepoint for a time-series, respectively. If you want to
use a different strategy, e.g. perform the selected autofocus strategy
on each n-th tile of each detailed experiment, you have to activate the
Tiles checkbox for the detailed experiment.
Then you can select the desired frequency of autofocus in the Focus
Strategy tool under Stabilization Event Repetitions and Fre-
quency when you switch to Expert mode.
- Recall Focus Only visible if Definite Focus is licensed and if both Find Surface and
SW Autofocus are checked.
Restores the saved focus position and applies it for each detailed ex-
periment.
Output Folder
- Output Folder Determines the folder where the analysis results are saved.
A subfolder will automatically be created for each run of a Guided Ac-
quisition experiment.
Note: When a ZEN Connect project is open, the output folder cannot
be set here, but is defined by the ZEN Connect project.
Note: When using Guided Acquisition with Direct Processing, make
sure that the processing computer has access to this output folder!
Open output folder after execution Activated: Opens the output folder after Guided Acquisition is finished.
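The stop-criterion behavior described above (sort the detected objects by a measurement feature, then cap the number of detailed scans) can be sketched as follows. The feature name "Area" and the object dictionaries are illustrative assumptions, not the actual ZEN API:

```python
# Sketch of the stop-criterion logic: rank detected objects by a measurement
# feature (Select Feature / Select Order) and keep at most N of them
# (Max. Objects). Objects and feature names here are invented examples.
def apply_stop_criterion(objects, feature, descending=True, max_objects=None):
    ranked = sorted(objects, key=lambda o: o[feature], reverse=descending)
    return ranked if max_objects is None else ranked[:max_objects]

detected = [
    {"id": 1, "Area": 120.0},
    {"id": 2, "Area": 340.5},
    {"id": 3, "Area": 80.2},
]
# Largest objects first, detailed scans limited to 2 objects overall.
selected = apply_stop_criterion(detected, "Area", descending=True, max_objects=2)
print([o["id"] for o in selected])  # → [2, 1]
```

The Per Well variant would apply the same capping separately to the objects of each well rather than to the whole overview scan.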
See also
2 Preliminary Work to Guided Acquisition [} 320]
2 Performing a Guided Acquisition [} 321]
2 Software Autofocus Tool [} 333]
HDR Confocal Basic enables the acquisition of image data with extended dynamic range by auto-
matic acquisition and combination of images with different excitation intensities.
5.5 Multi-Channel
Multi-Channel enables you to acquire fluorescence and transmitted light images in independent
channels with a technically unlimited number of independent channels for reflected light and
transmitted light techniques. It provides a fully automatic generation of the required microscope
setting for a channel with the possibility of adjusting the setting manually for the channel. It sup-
ports simultaneous acquisition of two channels using two synchronized cameras and offers inde-
pendent exposure times and shading corrections for each channel.
This module offers a configurable image-based autofocus functionality that searches through a series of axially stepped images, analyzing the "sharpness" of each. The z-value of the image returning the maximum sharpness is set as the new plane of observation.
The module requires the microscope to be fitted with a motorized Z-drive. It does not require a z-piezo actuator, nor is a z-piezo used by the software autofocus (SWAF) in the current implementation. On the Acquisition or Locate tab the settings for the SWAF can be adjusted in the Software Autofocus tool. These can also be called (and tested) by clicking the Find Focus (AF) button in the main button bar of the Acquisition tab. SWAF settings are stored as part of an experiment on the Acquisition tab.
The configuration allows the behavior of the SWAF run to be matched to the conditions under which the focus should be found. A basic description of the functions adjusted by the individual controls can be found further below. However, before going into the description of each parameter, we will address the following questions: How does the SWAF in the software attempt to locate the "focus"? And how do the parameter settings influence its behavior in this respect?
Perhaps the best place to start is with an explanation of the terms encountered when working
with focus strategies before looking at the individual strategies in detail. Many of these terms are
also encountered in the Tiles and Software Autofocus modules. The nomenclature takes some time to familiarize yourself with due to its subtleties. Here is a list of the more common terms:
Term/Abbreviation Description
SWAF Stands for Software Autofocus.
DF, DF.1 or DF.2 Stands for Definite Focus, Definite Focus.1 or Definite Focus.2.
Tile One of the individual image fields that make up a tile region i.e. a 2x2
Tile region is made up of 4 tiles arranged as a grid. The tiles have a
given overlap with their neighbors (default setting 10%) allowing
them to be stitched together as one image if necessary. Unless other-
wise specified by a focus strategy, each tile has the same z-value as
the parent Tile region. After acquisition, the individual tiles are dis-
played together as part of the tile region to which they belong, which
in turn makes up one scene.
Tile Region In a tile experiment a tile region refers to an ordered group of individ-
ual image fields (or tiles) that belong together and are arranged in the
form of a grid (these arrangements can be based on quadrilaterals,
circles, ellipses or freehand polygons) with a predefined overlap (de-
fault 10%) to facilitate stitching the images together. With the help of
tile regions it is possible to acquire areas with dimensions that vastly
exceed the size of an individual image field. Within an experiment a
number of tile regions can be acquired at various localities/ wells/ con-
tainers on the sample. Each tile region is based on an X and Y coordi-
nate of the stage and a Z coordinate of the focus drive and are de-
fined using the Tiles tool. After acquisition, the individual tile regions
are displayed as scenes to facilitate viewing.
Position An individual XY location on the sample, comparable to a tile region consisting of just one tile. Each position is based on an X and Y coordinate of the
stage and a Z coordinate of the focus drive. Individual positions or po-
sition arrays (grouped individual positions) are defined using the Tiles
tool. After acquisition, the individual positions are displayed as scenes.
Reference Channel The channel selected as a reference z-value for focus strategies and
events in particular a SWAF. The selected reference channel can be
changed in the Reference channel expander or in the Channels tool. It
is also possible to define a relative axial offset to the reference chan-
nel. This can be done for one or more other channels.
Focus Surface Refers to the interpolated surface of z-values derived from support
points (discrete z-values) defined by the user (or by functions such as
SWAF or DF.2) prior to the experiment (or immediately before acquisi-
tion start). A focus surface can be “local” or “global”. The local form is
confined to a single Tile region and attempts to describe the sample
topography covered by the tile region such that all its image fields
(tiles) will be in focus. The global surface form is technically identical,
but is associated with a sample carrier, and defined in the sample car-
rier template dialogue. Thus, tile regions or positions placed on this
carrier will follow the slope or contour defined by a topography that
covers part or most of the sample carrier. In both cases the surface is
defined by interpolation from discrete z-values – so called “support
points”. Note that a positions z-value is used as its local surface, and
as such does not require a support point. Global and local surfaces
cannot be mixed in a single experiment (or block).
Support Point To create a focus surface it is necessary to define one or more support
points. Support points are user defined collections of z-values that
correspond to the desired plane of observation at a given XY-coordi-
nate. They can also be defined by a SWAF run or a DF.2 Recall Focus function after the experiment is started, but before the first loop of images is acquired. The support points defined by the user can be distributed automatically by an algorithm, rearranged individually by hand, or placed at the current stage position. The number of support points employed determines the degree of interpolation that can be used to generate the topography of the focus surface. Typically, the interpolation criteria (the minimum number of support points required to generate a certain degree) should be overfilled with support points, and a lower interpolation degree selected for more robust results. By default the software employs an interpolation degree of 2 (which can generate a parabolic saddle surface with at least 9 support points). If too few support points are used, the next lower degree (a "tilted plane") will be used automatically. Higher interpolation degrees have to be selected manually, but are typically not necessary for most use cases.
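To illustrate the relationship between support points and interpolation degree, here is a minimal least-squares sketch of a degree-2 surface fit. ZEN's internal interpolation is not documented here; this only shows why 9 support points comfortably determine the 6 coefficients of a second-degree surface z = a + bx + cy + dx² + ey² + fxy, with a tilted plane as the lower-degree fallback:

```python
import numpy as np

# Fit a focus surface to support points (x, y, z) by least squares.
# degree=2 yields the 6-term parabolic/saddle surface; anything else falls
# back to a tilted plane z = a + bx + cy.
def fit_surface(points, degree=2):
    x, y, z = (np.array(c, dtype=float) for c in zip(*points))
    if degree == 2:
        A = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x * y])
    else:
        A = np.column_stack([np.ones_like(x), x, y])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)

    def surface(px, py):
        terms = [1.0, px, py] + ([px**2, py**2, px * py] if degree == 2 else [])
        return float(np.dot(coeffs, terms))

    return surface

# Nine support points lying on a tilted plane z = 10 + 0.01*x (z in µm):
pts = [(x, y, 10 + 0.01 * x) for x in (0, 500, 1000) for y in (0, 500, 1000)]
surface = fit_surface(pts, degree=2)
print(round(surface(250, 250), 3))  # → 12.5
```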
Z-value The current Z coordinate of the focus drive that is used to define a
Tile region, position or support point when it is created by the user.
Note that the individual tiles of a tile region all have the same initial z-
value unless support points are used either in the context of a local or
global focus surface, a software autofocus is used to determine them
individually or a definite focus stabilization adjusts them. The z-value
of a position defines its z-coordinate initially when a local focus sur-
face is used. Positions spread on a global focus surface (carrier based)
are adjusted accordingly as are the individual image fields of a tile re-
gion.
Adapt Z Values/Focus Surface The focus strategy Use Z Values/Focus Surface defined in Tiles Setup allows the focus surface or z-values defined in the Tiles tool to be modified by the result of a SWAF run or DF stabilization based on these initial values. These functions are not available when no SWAF module is present or no DF is configured. The function has several module/hardware-dependent variations:
- As Additional Action In focus strategies that use a focus surface or z-values defined by the Tiles setup (tool), it is possible to optionally execute a so-called "additional action" (a stabilization event) that adapts the focus surface/z-values. This occurs after the reference z-value has been reached as defined in the Tiles setup for each discrete z-value (i.e. each tile/position or the defining focus surface). Depending on the system configuration this can be a SWAF run or a DF stabilization. In the case of a SWAF run, the initially defined reference z-value is used to center the search range defined in the SWAF settings. Thus, a SWAF run can be centered on the sample topology, increasing the effectiveness and/or speed at which a maximum is detected and subsequently used for image acquisition. In certain applications, such as correlative array tomography (CAT), this function can be performed with DF instead. In this case a local focus surface is used to make sure that the DF stabilization stays within the catchment range of the device (only important for DF.1!). Complementary to this, the number of support points needed to initially define the surface can be significantly reduced for a large elongated tile region, which greatly reduces the set-up time to image the extremely thin (typically 70 nm thick or less) "ribbon" of serial sections.
- Update with Single Offset In combination with a Definite Focus or SWAF, if a time series is used, it is possible to make use of a focus surface or z-values defined by the Tiles setup and execute a so-called "update" (a stabilization event). This makes use of a SWAF run or a DF stabilization to update the focus surface/z-values defined initially by the Tiles setup. In a time series the update action is performed once per time point (or every nth) at a single discrete "wait position" (by default the center of the first tile region/position). A change in Z (thermal or residual focal drift) at the wait position, if detected, is then applied to all the focus surfaces or z-values defined in the Tiles setup (adapting them all by the change in Z, applied as a common offset). In some cases it is useful to be able to define a specific waiting position, for example when a special sample carrier is used whose structure/optical properties might disturb the DF reflex signal at the first tile region/position. Alternatively, if using a SWAF, a fiducial marker or similar available at this position that does not change (e.g. by bleaching or movement) can be used.
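The single-offset update amounts to measuring the drift once at the wait position and shifting every stored z-value by that amount. A minimal sketch with invented values:

```python
# Sketch of the "single offset" update: a drift detected at one wait position
# is applied as a common offset to every stored z-value (support points,
# tile regions, positions). All numbers here are illustrative only.
def update_with_single_offset(z_values, z_at_wait_before, z_at_wait_now):
    drift = z_at_wait_now - z_at_wait_before
    return [z + drift for z in z_values]

# Stored z-values (µm) and a 1.5 µm thermal drift observed at the wait position:
updated = update_with_single_offset([100.0, 102.5, 98.0], 100.0, 101.5)
print(updated)  # → [101.5, 104.0, 99.5]
```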
- Update with Multiple Offset For Definite Focus.2 only, an additional function is available that allows the device to be initialized on each and every z-value prior to the experiment and hence stabilize and update these individually according to their location relative to the sample/glass interface. This function can be used with or without a time series dimension. Thus, DF.2 enables true multi-location experiments in which the user-defined z-values (including support points) are used by DF.2 to create a stabilization map that is monitored and updated throughout the experiment.
Initial Definition for Z Values/Focus Surface This function allows you to select how the initial z-values used in the experiment are defined. By default this is By Tiles Setup, and the z-values specified there (in the Tiles tool) are used. However, it is possible to define or adjust these z-values directly before the experiment (after clicking Start Experiment), either with a SWAF run or with a DF.2 Recall Focus (Axio Observer). For the Celldiscoverer 7 this drop-down offers the additional options Find Surface or Find Surface + Additional Offset to define the initial z-values. In this case the z-values are initially defined by the results of this "pre-run" before the imaging loop starts. This can be particularly useful when working with multiwell plates or chamber slides where the sample is located at a similar position relative to the carrier surface in each well or chamber. It also allows the imaging loop of the experiment itself to be sped up and run in a triggered or compromised protocol (fast acquisition), thus reducing the time to complete the imaging loop of the experiment.
Stabilization Event Repetitions and Frequency Defines the frequency and repetition of stabilization events within a given focus strategy. For the DF and SWAF focus strategies you can determine when and where in the experiment these events are executed in synchrony with the imaging loops. A loop here means a time series or positions, for example, with the event synchronized to occur immediately prior to the chosen loop. A general limitation of this implementation (to limit code complexity) is that these stabilization events can only be synchronized to iterate with a single imaging loop entity, i.e. the selection is only possible in a mutually exclusive manner. These settings can be accessed only when Show All is activated and Expert mode is selected. Initially, default settings are assigned; they can be restored by clicking the Standard button. In Expert mode the settings are displayed and can be modified if necessary. Depending on the dimensions of the experiment or focus strategy, different parameters can be modified to meet the experiment's needs. For the Tile Region loop you can optionally select where the event occurs within the tile region, either in the center or at the first tile of the region (typically the upper left-hand corner). This is of use with SWAF events, as the upper left-hand corner of a tile region often does not contain sample, so the SWAF run will not return a suitable maximum (new z-value). Finally, focus strategies that include Definite Focus and are used with a time series dimension may also allow stabilization during the interval of the time series, i.e. asynchronous to the imaging loops of the experiment. This might be necessary if the time interval is on the order of tens of minutes, if a large thermal drift is expected (more significant for DF.1), or if the time series has no or a very short interval (i.e. acquisition as fast as possible at a single position), allowing synchronized events to be disabled completely.
Focus Surface Outlier Under Tools > Options > Acquisition > Tiles you find the option Enable Removing of Focus Surface Outlier. With two parameters you can define how so-called "outlier" values are handled prior to the calculation (interpolation) of a focus surface. This is particularly helpful when the z-values that will be used for this purpose contain one or more values that differ obviously from the others (for example, if a SWAF run has returned a z-value that does not lie close to the sample plane of interest). If not removed, such values locally distort the focus surface, potentially producing "blur" in the resulting images. By default a linear fit is used to detect such outliers, in combination with a statistical threshold value (sigma). Values that do not meet these criteria (i.e. are significantly outside this threshold) are classified as outliers and are not used to calculate the focus surface that will subsequently be generated for the experiment. In extreme use cases it is possible to modify the sigma value or use a mean value instead of a linear fit for this purpose, but typically these default values never need to be changed.
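The default outlier handling (linear fit plus a sigma threshold) can be sketched as follows. The threshold factor of 1.5 sigma and the sample values are assumptions for illustration, not the values used internally:

```python
import numpy as np

# Sketch of the outlier handling described above: fit a plane (linear fit)
# through the support-point z-values and discard points whose residual
# exceeds a sigma-based threshold. The n_sigma default is an assumption.
def remove_outliers(points, n_sigma=1.5):
    x, y, z = (np.array(c, dtype=float) for c in zip(*points))
    A = np.column_stack([np.ones_like(x), x, y])   # plane z = a + bx + cy
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    residuals = z - A @ coeffs
    sigma = residuals.std()
    keep = np.abs(residuals) <= n_sigma * sigma
    return [p for p, k in zip(points, keep) if k]

pts = [(0, 0, 10.0), (1000, 0, 11.0), (0, 1000, 10.5),
       (1000, 1000, 11.5), (500, 500, 40.0)]       # last point is an outlier
kept = remove_outliers(pts)
print(len(kept))
```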
In microscopy, the focus can be implied from image parameters, such as the contrast or intensity,
that vary with the position of the objective’s plane of observation in the sample and the level of
detail at a given plane. However, an algorithm that tries to detect (and maximize) such values will
only return an axial position that corresponds to a plane of interest if these coincide (which is typi-
cally the case with (thin) samples with a singular discrete plane of detail).
This becomes increasingly difficult with higher numerical aperture (NA) lenses, thicker samples,
and less pronounced levels of detail (modulated as change in contrast or intensity in the resulting
image). Hence, SWAF is not to be understood as a focus finder, but can be used as a method for
reliably searching over a given axial range and locating such a plane in a sample. Thus, although
not all samples and imaging conditions will be appropriate, SWAF is an approach that allows a
useful detection of a focus plane as a start for further imaging activities.
Basically, the SWAF searches, with a pre-set z step size, within a given range of z values for the
image plane that returns the maximal “sharpness” value. The step size or sampling rate of the
SWAF is determined by the objective NA and wavelength (more details are given below). In turn
the (automatic) search range is also largely determined by the objective NA: optics dictate that higher-NA objectives have smaller search ranges and vice versa. For SWAF to be useful for the application in question, the image plane that returns the maximal sharpness should ideally be equivalent to the plane of interest in the sample; thus, sample characteristics determine whether SWAF is the appropriate method to reliably detect the desired plane of observation. The
component algorithms and functions of the SWAF, their relationships and the basic SWAF work-
flow are visualized schematically in the image below:
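As a hedged illustration of how step size and search range scale with the objective, the following sketch estimates dz from the classical depth-of-field expression n·λ/NA² and builds a symmetric search range around the current z-position. ZEN's exact formulas are not documented here; both the dz estimate and the half-range factor are assumptions:

```python
# Rough estimate of the SWAF sampling step from objective NA and emission
# wavelength, using the classical depth-of-field expression n * lambda / NA^2.
# This is an illustration only, not ZEN's actual formula.
def estimated_dz_um(wavelength_um, na, refractive_index=1.0):
    return refractive_index * wavelength_um / na**2

def search_z_positions(center_um, dz_um, half_range_factor=10):
    # Assumed heuristic: search +/- half_range_factor steps around the center.
    half = half_range_factor * dz_um
    n_steps = 2 * half_range_factor
    return [center_um - half + i * dz_um for i in range(n_steps + 1)]

dz = estimated_dz_um(0.52, 0.8)    # e.g. green emission, dry NA 0.8 objective
zs = search_z_positions(50.0, dz)  # centered on the current z-position (µm)
print(round(dz, 3), len(zs))
```

Note how a higher NA shrinks dz quadratically, and with it the automatic search range, matching the statement above.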
Parameter Description
Mode Here you can select the sharpness measurement mode.
On the other hand, for optical sectioning methods (e.g. Spinning
Disk) the software will automatically select an Intensity based
approach to determine the sharpness values.
If the microscope configuration can't be detected automatically, you can manually select the sharpness measurement mode.
- Reflex Only available for LSM systems with imaging tracks other than
MPLX and Airyscan SR. Using those tracks might lead to overex-
posure of the channel and failure for the Autofocus.
If activated, the reflex of the laser on the cover glass surface is
detected.
An offset is used to focus onto the sample. You need to add the
offset once by clicking the Find Offset button while the sample
is in focus. The method requires a refractive index mismatch between the immersion medium and the cover glass and is therefore not suitable for oil immersion objectives.
For Reflex mode, the Search parameter should ideally be set to
Smart. After the offset was defined, you can manually reduce
the search range in order to facilitate shorter focusing times. The
acquisition settings (e.g. laser line, PMT gain, emission filters) are
configured automatically.
Quality This parameter determines the merit function that will be used to
calculate the contrast value of the image when Contrast mode is
used by the SWAF (Software Autofocus) run to measure sharp-
ness.
- Low Signal If selected, a single merit function to determine the value is used.
Use this setting if the image is noisy or the sample covers a small area of the field of view, as might be the case if you work with a calibration slide or beads.
Search There are two options: Smart and Full. These define a different
type of primary maximizer used to run the SWAF, which in turn
determines a number of additional characteristics and parame-
ters of the entire process. To learn more about this, read the FAQ
entry [} 1320].
- Smart If selected, an alternative maximizer is used that can search in a
bidirectional manner and will stop when a local maximum is
found in the sharpness values (i.e. a significant decrease of
sharpness in both z directions). If an error condition is detected, the Smart maximizer will throw an exception.
For specific information on the Software Autofocus using LSM
Tracks in this context, also refer to chapter Software Autofocus
Using LSM Tracks [} 336].
- Full If selected, the maximizer employed with this setting uses a uni-
directional movement of the z-drive stepping through the entire
relative or fixed search range defined in the SWAF tool (see Aut-
ofocus Search Range). The Full maximizer will return a global
maximum for the autofocus run or throw an exception when an
error condition is detected.
For specific information on the Software Autofocus using LSM
Tracks in this context, also refer to chapter Software Autofocus
Using LSM Tracks [} 336].
Sampling Here you can select the step size of how the search range is sam-
pled.
- Fine Uses a small Z-distance (0.5 * dz) between the individual focus
images that are used to calculate the best focus position.
This doubles the number of z-slices for the given range.
- Coarse Uses a large Z-distance (4 * dz) between the individual focus im-
ages that are used to calculate the best focus position.
Reduces the number of z-slices by a factor of four.
Autofocus Search Here you can switch between two distinct approaches for the
Range autofocus search range:
- Automatic Range Activated: Calculates the range for the autofocus search auto-
matically depending on the objective set.
- Step Size Shows the distance between the individual focus images set un-
der Range.
To learn more, read Fixed Autofocus Search Range.
- Set Last Defines the current Z-position as the end (last) point for the soft-
ware autofocus. Alternatively, you can enter the desired value in
the input field to the left of the button.
- Range Displays the range used for the autofocus search. Adjust it via the Set Last/Set First buttons or the input fields.
- Step Size Displays the selected Sampling distance between the individual
focus shots.
- Set First Defines the current Z-position as the start (first) point for the
software autofocus. Alternatively, you can enter the desired
value in the input field to the left of the button.
Autofocus ROI Here you can define a Spot Meter or Focus ROI such that the pix-
els evaluated by the SWAF are limited to a user defined region of
the image.
This is particularly useful if you use a fiducial marker such as a
speck of dirt or other constant artefact that serves as a reference
for the sample focus at a fixed position over time.
The autofocus ROI is displayed in the live image of the sample
enabling it to be positioned and resized, as necessary. As an ad-
ditional aid to help focusing you can also use the focus bar func-
tion of the live image that monitors the image contrast in the live
image or Spot Meter ROI.
- Spot Meter / Focus ROI Activated: Only uses the values from the Spot Meter / Focus ROI to calculate the focus position.
The Focus ROI is displayed in the live image as a red dashed rectangle. You can adapt it by clicking on the frame and changing its size and position.
Note that this option is not available for LSM acquisition.
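The difference between the Full and Smart maximizers described above can be sketched over a list of precomputed sharpness samples (index = z-step). This illustrates only the search behavior, not ZEN's implementation:

```python
# Full search: unidirectional sweep over the whole range; the global
# maximum of the sharpness samples wins.
def full_search(sharpness):
    return max(range(len(sharpness)), key=lambda i: sharpness[i])

# Smart search: bidirectional hill climb from a start index; stops at the
# first local maximum (sharpness decreases in both z directions).
def smart_search(sharpness, start):
    i = start
    while True:
        left = sharpness[i - 1] if i > 0 else float("-inf")
        right = sharpness[i + 1] if i < len(sharpness) - 1 else float("-inf")
        if left <= sharpness[i] >= right:
            return i
        i = i - 1 if left > right else i + 1

values = [1, 2, 5, 3, 2, 4, 9, 6]
print(full_search(values), smart_search(values, start=3))  # → 6 2
```

The example shows the trade-off stated above: Full always finds the global maximum of the range, while Smart stops early at a nearby local maximum, which is faster but depends on the starting z-position.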
Confocal Tracks are also suitable as reference Channels for the Software Autofocus. As LSM ac-
quisition is by design slower compared to Camera acquisition, some optimizations are done in
the background in order to speed up the focusing action.
The typical measure for the correct focus position in confocal images is the intensity. Hence the
aim of the SWAF is here not to generate images of a certain quality, but only to evaluate relative
image intensities along the z-stack. This allows us to use very coarse scanning parameters.
Generally, the SWAF for LSM uses a fixed Frame Size of 64*64 pixels in combination with the
fastest possible scan speed at the currently configured zoom. To further speed up the acquisition,
bidirectional scanning is used. Whatever Laser power you specify in the Channels tool window
for this Track is used during the SWAF action. Of note, while you assign a reference Channel, the
corresponding Track with all its channels will be active during the focusing.
Some behavior depends on the selected Search Mode Full or Smart.
In the Full search, the system will use the detector gain as configured in the Channels tool win-
dow.
In contrast, the Smart search aims to start close to the likely intensity maximum of the z-Stack.
This focus position is approximated by a fast line z-stack in the center of the image frame. As the
line scan generates fewer pixels and a higher noise level, a useful dynamic range needs to be ensured. To this end, the fast line z-stack is repeated several times with increasing PMT gain. After
the line scan, regardless if an intensity peak was found or not, a frame wise autofocus will follow.
If no peak could be identified, e.g. because of a sparsely distributed sample, the Smart search will start at the original z-position and no optimization of the starting position will take place. Essentially, the Smart search will outperform the Full search for large Search Ranges and
a highly varying effective focus position.
If the focus fluctuations are predictably small, a narrow Search Range in combination with a Full
search might be faster. As a final remark, Camera-based Autofocus can be time-saving, especially
when the search range needs to be large and a Full Search is required. While Camera and LSM
cannot be combined into one image document, the deactivated camera Track may still be used
as a reference Track.
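The gain ramp of the Smart pre-run described above can be sketched as follows. The stand-in acquisition function, the gain values, and the dynamic-range criterion are all illustrative assumptions, not ZEN internals:

```python
# Sketch of the gain ramp: repeat the fast line z-stack with increasing
# detector gain until the recorded intensities use a useful dynamic range.
def acquire_line_zstack(gain):
    # Stand-in for a real line z-stack: peak intensity grows with gain,
    # clipped to an 8-bit range.
    return [min(255, int(v * gain)) for v in (2, 5, 12, 5, 2)]

def ramp_gain_until_useful(gains, min_peak=100):
    for gain in gains:
        profile = acquire_line_zstack(gain)
        if max(profile) >= min_peak:      # enough dynamic range reached
            return gain, profile
    return gains[-1], profile             # give up with the last attempt

gain, profile = ramp_gain_until_useful([1, 3, 10, 30])
print(gain, max(profile))  # → 10 120
```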
This module enables you to acquire high-resolution images through automatic scanning of prede-
fined tile regions and positions of a sample. The regions of tile images and individual positions
can be combined freely and a motorized stage allows automatic scanning of specimens. Overlap-
ping individual images can be combined into an overview image using “stitching” algorithms. The
module supports the use of sample carriers, focus correction maps, stitching and shading correc-
tion and is compatible with Software Autofocus. Some functionality is generally available; for the full set of features you need the dedicated license for the module, see Licensing and Functionalities of Tiles & Positions [} 338].
Info
Acquiring Tiles with different z
If you want to acquire tile regions or positions with different z-positions, you need to use a
suitable focus strategy, see Using focus strategies [} 63].
Info
Automatic Focus Strategy Selection
When you activate the Tiles tool in any experiment for the first time, the software automati-
cally selects the suitable focus strategy Use Z Values/ Focus Surface defined in Tiles Setup.
This focus strategy is only available if you have licensed the Tiles & Positions module and it
can be optimized with the Focus Strategy Wizard in the Focus Strategy tool, see Focus
Strategy Wizard [} 933] and Focus Strategy Tool [} 928]. In the case that you do not have
the Tiles & Positions module, ZEN still selects the appropriate focus strategy for you.
1 Tile Regions
2 Position-Array
Some basic functionality for tiles and position experiments is generally available in the software,
but the full Tiles & Positions functionality requires a license.
Basic functionality
The functionality generally available in ZEN includes:
Licensed Functionality
If you have licensed the module Tiles & Positions and activated it in Tools > Modules Man-
ager, the additional functionality includes:
§ The Advanced Setup functionality to set up experiments more easily, including the possibility
to copy and paste tile regions and positions.
§ The use of sample carriers for your tile or position experiments, including the functionality to
generate and use your own sample carrier templates.
§ The suitable focus strategy Use Z Values/ Focus Surface defined in Tiles Setup and the
Focus Strategy Wizard to guide you through the setup of a focus strategy for the best fo-
cus.
§ The creation and use of local and global focus surfaces.
§ The possibility to create and use custom categories for your tiles and positions.
Prerequisite ü The Tiles module is activated under Tools > Modules Manager... > Tiles & Positions.
ü To set up Tiles experiments, you require a motorized stage. This must be configured and cali-
brated correctly in accordance with the camera orientation. For more information read Cali-
brating the Stage and Selecting the Channel [} 343].
ü You are on the Acquisition tab.
ü You have created a new experiment [} 51], defined at least one channel [} 51] and cor-
rectly set the focus and exposure time.
1. Activate the Tiles checkbox in the Acquisition Dimensions section to display the Tiles
tool.
à In the Left Tool Area the Tiles tool appears in the Multidimensional Acquisition tool
group.
You have successfully completed the general preparations. You can now continue with the next
steps of this guide.
See also
2 Setting Up a Simple Tiles Experiment Without the Tiles & Positions Module [} 339]
5.7.3 Setting Up a Simple Tiles Experiment Without the Tiles & Positions Module
7. Save the experiment. To do this, in the Experiment Manager click and select Save
As. Enter a name for the experiment in the input field (e.g. Simple Tile Experiment).
8. Click Start Experiment.
à The Tile Region experiment is acquired.
à The individual tile regions are displayed in the acquired file as scenes and can be selected
using the Scene slider on the Dimensions tab. If you deactivate the Scene checkbox, all
tile regions are displayed as an overview.
You have successfully set up and acquired a simple Tile Region experiment.
Info
Z values
To ensure that the individual z values of the tile regions are taken into account, ZEN automati-
cally selects the most appropriate focus strategy when the checkbox Tiles is activated. For the
experiment described here no further modification needs to be made. If you want to acquire
all tile regions at the same z-position, then you have to select None from the dropdown list in
the Focus Strategy tool. The individual z-positions are then ignored and the current z-position
at the time the experiment is started is used for all tile regions.
Info
Shortcut
You can also add a predefined tile region at the current stage position by pressing the F9 but-
ton on your keyboard. The size of this region is the last defined number of tiles in x and y, or a
square of 3x3 tiles if you have never defined a region before.
5.7.4 Setting Up a Simple Positions Experiment without the Tiles & Positions Module
2. Start the Live mode to use the stage to locate a position that you want to acquire.
à The X and Y coordinates of the current position are displayed in the Current X/Y display
fields.
3. Bring the specimen into focus using the focus drive.
4. Click .
à The current position is added to your experiment.
à If you are close to a position that you added previously, the software asks whether you really want to add another position at this location (the threshold lies within a circle whose radius is less than half the approximate width of the camera's visible field).
5. To add further positions, move the stage to another position on the sample and repeat the
previous steps.
à The added positions are shown in the list in the Single Positions section with their X, Y
and Z-coordinates.
6. Save the experiment. To do this, in the Experiment Manager click and select Save
As. Enter a name for the experiment in the input field (e.g. Simple Positions Experiment).
7. Click Start Experiment.
à The Positions experiment is acquired.
à The individual positions are displayed in the acquired file as scenes and can be selected
using the Scene slider on the Dimensions tab. If you deactivate the Scene checkbox, all
positions are displayed simultaneously as an overview.
You have successfully set up and acquired a Positions experiment.
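The proximity warning described in step 4 is essentially a distance check against all previously added positions. The following minimal sketch illustrates the idea; the function name and coordinates are illustrative, and the exact cutoff is an assumption (the manual only states that the radius is less than half the approximate width of the camera's visible field, so half the field width is used here):

```python
import math

def is_near_existing(new_pos, existing, field_width):
    """Check whether a new stage position lies within the warning radius
    of any previously added position. Using half the visible field width
    as the cutoff is an illustrative assumption."""
    radius = field_width / 2.0
    return any(
        math.hypot(new_pos[0] - x, new_pos[1] - y) < radius
        for (x, y) in existing
    )

positions = [(100.0, 200.0), (500.0, 800.0)]
# A point 50 units from the first position, with a field width of 250:
print(is_near_existing((150.0, 200.0), positions, 250.0))  # → True
print(is_near_existing((900.0, 900.0), positions, 250.0))  # → False
```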
Info
Z values
To ensure that the individual Z values of the positions are taken into account, the software automatically selects the most appropriate focus strategy when the checkbox Tiles is activated.
For the experiment described here no further modification needs to be made. If you want to
acquire all positions at the same Z-position, then you have to select None from the dropdown
list in the Focus Strategy tool. The individual Z-positions are then ignored and the current Z-
position at the time the experiment is started is used for all positions.
Info
Shortcut
You can also add a single position at the current stage position by pressing the F10 button on
your keyboard.
If you add positions or tile regions, the current Z-value is automatically adopted for the tile region
or position.
§ Learn about how to adjust and verify the Z-values of positions in the chapter Adjusting Z-
Values of Positions [} 342].
§ Learn about how to adjust and verify the Z-values of tile regions in the chapter Adjusting Z-
Values of Tile Regions [} 341]. Note that the Z-values defined here are valid for all tiles in the
respective tile region.
Info
To ensure that the individual z values of the tile regions and/or positions are taken into ac-
count, ZEN automatically selects the most appropriate focus strategy [} 63] when the check-
box Tiles is activated. For the experiment described here no further modification needs to be
made. If you want to acquire all tile regions at the same z-position, then you must select None
from the dropdown list in the Focus Strategy tool. The individual z-positions are then ignored
and the current z-position at the time the experiment is started is used for all tile regions. If you do not have the Tiles & Positions module, ZEN will still select the appropriate focus strategy for you.
See also
2 Creating a Local Focus Surface [} 351]
Prerequisite ü You have set up a Tiles experiment with at least one tile region.
1. To check the z-value of tile regions, open the Tile Regions section in the Tiles tool.
à The z-values of the tile regions are displayed in the Z column of the list.
2. Double-click on the list entry of the tile region that you want to check.
à The stage automatically locates the center of the tile region and the associated z-posi-
tion.
3. Use the Live mode to check the z-value of the tile region.
4. To adjust the z-value, set the new z-position with the Focus tool.
5. In the Tile Regions list, click on Options and select Set Current Z For Selected
Tile Regions. Alternatively, in the Tile Regions list, right-click the tile region entry and se-
lect Set Current Z For Selected Tile Regions.
6. To check further tile regions, repeat steps 2 to 4.
7. To check and adjust a large number of tile regions, click on the Verify button.
à The Verify Tile Region dialog opens. There you have an interface for the verification
process of each tile region.
8. Click on Close after you have verified all tile regions.
You have successfully checked and adjusted the individual z-values for the tile regions.
See also
2 Introduction [} 63]
Prerequisite ü You have set up a tile experiment with at least one position.
ü You are on the Acquisition tab in the Tiles tool.
1. To check and adjust the z-value of positions, open the Positions section.
à The z-values are displayed in the Z column of the Single Positions list.
2. Double-click on the list entry of the position that you want to check.
à The stage automatically locates the position.
3. Use the Live mode to check the z-position of the position.
4. To adjust the z-value, set the desired position using the focus drive.
5. In the Single Positions list, click on Options and select Set Current Z For Se-
lected Positions. Alternatively, in the Single Positions list, right-click the position entry
and select Set Current Z For Selected Positions.
6. To check and adjust a large number of positions, use the Verify Positions dialog.
7. To do this, click on the Verify Positions button in the Positions section.
à The Verify Positions dialog opens.
8. Select the Helper Method you want to use. This will support you in determining the z-val-
ues. The options are Autofocus (AF) and Definite Focus (DF). If you have neither, you can only adjust z-values manually.
9. Click on the Move to Current Point button.
à The stage moves automatically to the position in the list that is highlighted in blue. Alter-
natively, you can double-click on the position in the list that you want to check.
10. In the Live mode use the Focus (or SW Autofocus) tool to adjust the desired z-value.
11. Click on the Set Z and Move to Next button.
à The position is marked with a check mark.
à The stage moves automatically to the next position in the list.
12. Repeat the last 3 steps until you have checked all the points in the list.
à The message All points have been verified appears.
13. Close the Verify Positions dialog.
You have successfully verified and adjusted the individual z-values for positions.
See also
2 Introduction [} 63]
On startup of a system with motorized stage and/or focus a request will appear asking if the com-
ponents should be driven to the end switches and calibrated. This ensures that you begin working
with absolute coordinates in this session with the microscope. If the microscope power is cycled,
then this process should be repeated. This function is particularly useful if you repeatedly work with a sample carrier of the same format (e.g. a 96-well plate) mounted in the same manner with a given experiment template. If you perform a carrier calibration once with a calibrated stage, the carrier calibration remains essentially always valid. Carrier calibration is done in the Sample Carrier section of the Tiles tool. Note that these features are only available if you have a Tiles & Positions license.
Info
The request to calibrate stage and focus on startup can be activated/deactivated under Tools
> Options > Startup/Shutdown > Stage/Focus Calibration.
Prerequisite ü You licensed the Tiles & Positions module and activated it in Tools > Modules Manager.
1. Put your Sample Carrier on the stage.
2. Go to the Acquisition tab.
3. Select a low magnification objective (e.g. 10x) from the Microscope tool in the Right Tool
Area.
4. Click Live and find your focus area either using transmitted or fluorescence light.
5. In the Stage tool of the Right Tool Area, activate the Show all mode and then click Cali-
brate.
6. Check if the alignment and orientation of your camera and joystick is correct by dragging
the software joystick up, down, left and right and observe whether the movement of your
image corresponds to movement of the circle.
The alignment of the camera is correct if the movement of a given stage axis is congruent
to the corresponding axis of the image. An offset in the alignment will be seen as a sawtooth pattern along the edge of a tiled image (e.g. a 4x4 tile region).
In addition, check whether the image movement also corresponds accordingly when you
move the hardware joystick of the stage.
7. If the orientation of the camera to the software joystick (stage tool, right tool area) is incor-
rect, go to the Camera tool, activate the Show all mode, and click Model Specific. The
orientation of the camera (image) can be adjusted by flipping, rotating, or mirroring.
8. Alternatively, you may also need to invert the x- and y-axes of your stage in the MTB in order to align the hardware joystick and the software-controlled stage movement.
9. Ensure that all the prerequisites (e.g. channel and camera settings) for a Tiles & Positions
experiment on your sample are fulfilled. If necessary, use the Smart Setup for the setup.
10. After you have defined at least one channel (e.g. EGFP), activate the Tiles checkbox.
11. If you wish to work with a sample carrier, complete the following steps.
12. Open the Tiles tool and activate the Show all mode.
13. In the Tiles tool, open the Sample Carrier section.
14. Click Select....
15. Select a sample carrier template and click OK.
Advanced Setup makes it easier for you to create tile regions and positions by displaying the dis-
tribution and dimensions of tile regions and positions in the travel range of the stage. You can
generate a Preview Scan and draw in tile regions or positions precisely on the basis of this tem-
plate. For the preview scan you have the option of using an objective with a lower magnification
and/or a different channel (e.g. transmitted light).
Info
To ensure that the individual z values of the tile regions and/or positions are taken into ac-
count, ZEN automatically selects the most appropriate focus strategy [} 63] when the check-
box Tiles is activated. For the experiment described here no further modification needs to be
made. If you want to acquire all tile regions at the same z-position, then you must select None
from the dropdown list in the Focus Strategy tool. The individual z-positions are then ignored
and the current z-position at the time the experiment is started is used for all tile regions. If you do not have the Tiles & Positions module, ZEN still selects the appropriate focus strategy for you.
Prerequisite ü To set up tiles experiments in Advanced Setup, you need the Tiles & Positions module.
ü You have read the general introduction, see Tiles & Positions [} 337].
ü You are on the Acquisition tab in the Tiles tool.
1. Click Show Viewer.
à The Tiles Advanced Setup view opens. For more information, see Tiles Advanced
Setup [} 372].
à The live mode is activated automatically. Deactivate the live mode if you do not need it
to prevent bleaching of the sample. To do this, click on the active Stop button in the
Left Tool Area. This default behavior can be changed in Tools >Options > Acquisition
> Tiles & Positions.
Prerequisite ü You have selected an objective with a relatively low magnification in your experiment settings.
ü You are in Advanced Setup.
1. In the left toolbar, click on Preview Scan.
à The Preview Scan Toolbar [} 376] is displayed as the top toolbar.
2. In the top toolbar, deactivate Use Existing Experiment Settings and select or deselect the
channels that you want to use for the preview scan.
3. If necessary, use the live mode to adjust the focus area and exposure following a change of
objective or channel.
4. To obtain a better overview, slightly zoom out of the Advanced Setup view.
5. Start the Live mode to use the stage to locate approximately the center of the region for
which you want to generate a preview scan.
6. In the left toolbar [} 375] in the Tiles section, click on the Setup by contour button.
à The Contour Toolbar [} 377] is displayed as the top toolbar.
7. In the top toolbar, select the Rectangular Contour tool.
8. In the Stage View, use the tool to draw a rectangle that approximately encloses the region
for which you want to generate a preview scan.
9. Alternatively, in the Stage View, use the live image to navigate to the edge of the area that
you want to image. We recommend, for example, the bottom left corner. Add the first
marker. Now navigate to the upper edge of the object and add a second marker. Finally, if
necessary, navigate to the right edge and add a third marker and click on Done. You can
optimize the tile region by selecting a circular contour shape for your sample instead.
à A tile region is created for the marked region and displayed in the list in the Tile Regions
section of the Tiles tool.
10. With the help of the Live mode, check whether the desired image region is covered by the
tile region. To do this, use the stage to locate the corners and edges of the tile region and
increase or reduce the yellow selection frame as necessary.
11. In the left toolbar, click on Preview Scan and in the top toolbar click on Start.
1. In the Tiles Regions section, deactivate the preview tile region (TR 1) by deactivating the
checkbox of the corresponding list entry. This prevents the acquisition of the preview tile
region during the actual experiment.
2. In the Microscope Control tool in the Right Tool Area, select the objective you want to
use for final acquisition.
3. Use the Live mode to adjust the focus area and exposure accordingly.
You can now continue setting up the tile experiment.
See also
2 Channels Tool [} 914]
2 Tiles Advanced Setup [} 372]
2 Tiles & Positions with Advanced Setup [} 344]
Prerequisite ü You have generated a preview scan [} 345] that will help you to position the tile regions
more easily.
1. In the left toolbar [} 375] in the Tiles section, click on the Setup by contour button.
à The Contour Toolbar [} 377] is displayed as the top toolbar.
2. In the top toolbar, select the desired contour tool.
3. Use the contour tool in the stage view to draw in the tile regions you want to acquire.
à Tile regions are created for each marked region. They are added to the list in the Tile Re-
gions section of the Tiles tool.
You have successfully created tile regions in Advanced Setup.
1. In the left toolbar [} 375] in the Tiles section, click on the Setup by predefined button.
à The Predefined Toolbar [} 377] is displayed as the top toolbar.
2. In the top toolbar, choose how many tiles in x and y dimension you want to add.
3. Click Add Tile Region to add the respective tile region.
You have now created tile regions in the Advanced Setup.
Info
Shortcut
You can also add a predefined tile region at the current stage position by pressing the F9 but-
ton on your keyboard. The size of this region is the last defined number of tiles in x and y, or a
square of 3x3 tiles if you have never defined a region before.
Prerequisite ü You have selected and calibrated a sample carrier with one or more wells/ containers.
1. In the left toolbar [} 375] in the Tiles section, click on the Setup by carrier button.
à The Tiles Carrier Toolbar [} 379] is displayed as the top toolbar.
2. In the Carrier tab, select the individual wells for which you want to create tile regions by
pressing the Ctrl key and clicking on the desired wells.
3. In the top toolbar, select Fill Factor and enter the desired value in the Fill Factor input
field.
4. Click .
According to the selected Fill Factor, the wells will be filled with a calculated number of tiles that
are located around the center. To create a given size of tile region, use the Columns/Rows func-
tion in a similar manner.
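The idea behind the Fill Factor can be sketched as a simple area calculation: how many tiles, placed around the well center, are needed to cover a given fraction of the well. This is only an illustration of the concept; ZEN's actual placement algorithm, and the function name and units below, are assumptions:

```python
import math

def tiles_for_fill_factor(well_diameter, tile_w, tile_h, fill_factor):
    """Estimate how many tiles are needed so that their combined area
    covers the given fraction of a circular well. Illustrative only;
    this does not reproduce ZEN's internal calculation."""
    well_area = math.pi * (well_diameter / 2.0) ** 2
    tile_area = tile_w * tile_h
    return math.ceil(fill_factor * well_area / tile_area)

# A 6.4 mm well, 0.5 x 0.5 mm tiles, 50 % fill factor:
print(tiles_for_fill_factor(6.4, 0.5, 0.5, 0.5))  # → 65
```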
Info
Stage movement
If you double-click a single well, the stage moves to the center of that well.
1. In the left toolbar [} 375] in the Positions section, click on the Setup by location button.
à The Positions Location Toolbar [} 380] is displayed as the top toolbar.
Info
Shortcut
You can also add a single position at the current stage position by pressing the F10 button on
your keyboard.
1. In the Tiles tool, open the Positions section and click on Position Arrays.
2. In the left toolbar [} 375] in the Positions section, click on the Setup by array button.
à The Position Array Toolbar [} 380] is displayed as the top toolbar.
3. In the top toolbar, choose either the rectangular or circular Contour, adjust the Number
of required positions and the Bias where the positions should be located.
4. Mark the interesting area in the Center Screen Area with a pressed left mouse button.
The positions will be generated automatically.
Info
Random Distribution
If the Random checkbox is activated, the selected number of positions for the array is determined randomly within the array's space.
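A random distribution of this kind can be sketched as uniform sampling within the array contour, with rejection sampling for the circular case. The function name, coordinate units, and seeding are illustrative assumptions, not ZEN's implementation:

```python
import random

def random_positions(n, center, width, height, circular=False, seed=None):
    """Draw n random positions inside a rectangular or circular array
    region. For a circular contour, points outside the inscribed
    ellipse are rejected and redrawn."""
    rng = random.Random(seed)
    cx, cy = center
    points = []
    while len(points) < n:
        x = rng.uniform(cx - width / 2, cx + width / 2)
        y = rng.uniform(cy - height / 2, cy + height / 2)
        if circular:
            # Reject points outside the ellipse inscribed in the bounding box.
            if ((x - cx) / (width / 2)) ** 2 + ((y - cy) / (height / 2)) ** 2 > 1:
                continue
        points.append((x, y))
    return points

pts = random_positions(10, (0.0, 0.0), 4.0, 4.0, circular=True, seed=1)
print(len(pts))  # → 10
```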
Prerequisite ü You have selected and calibrated a sample carrier template [} 356].
ü You are on the Acquisition tab.
1. To obtain a complete overview of the sample carrier, zoom out of the view using the
mouse wheel. If necessary, use the panning tool (press Alt and the left mouse button) to
move the stage view as needed.
2. In the left toolbar [} 375] in the Positions section, click on the Setup by sample carrier
button.
à The Positions Carrier Toolbar [} 381] is displayed as the top toolbar.
3. Select the containers in which you want to distribute positions by pressing the Ctrl key and
clicking on the relevant containers.
4. In the top toolbar adjust the Number of required positions and the Bias where the posi-
tions should be located. Alternatively, if you want to create the positions as a Grid, adjust
the Columns, Rows, and Overlap.
5. Click on the Create button .
à The selected containers are each filled with a Position Array (group of positions).
à In the Positions section of the Tiles tool, the Position Arrays are displayed in the Posi-
tion Arrays list.
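The Grid option described in step 4 (Columns, Rows, Overlap) can be sketched as follows: adjacent positions step by the field-of-view size minus the chosen overlap fraction, centered on the container. The field-of-view size, the centering convention, and the function name are illustrative assumptions:

```python
def grid_positions(center, cols, rows, fov_w, fov_h, overlap):
    """Generate a cols x rows grid of positions around a container
    center, stepping by the field of view reduced by the overlap
    fraction (0.1 = 10 % overlap)."""
    step_x = fov_w * (1.0 - overlap)
    step_y = fov_h * (1.0 - overlap)
    cx, cy = center
    # Offset so that the grid is centered on the container.
    x0 = cx - step_x * (cols - 1) / 2.0
    y0 = cy - step_y * (rows - 1) / 2.0
    return [(x0 + c * step_x, y0 + r * step_y)
            for r in range(rows) for c in range(cols)]

pts = grid_positions((0.0, 0.0), 3, 2, 1.0, 1.0, 0.1)
print(len(pts))  # → 6
```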
In the Tiles - Advanced Setup, you have several options to open/import images. Follow the re-
spective workflow.
You have now imported a preview scan image into the Tiles Advanced Setup.
To copy and paste a specific Tile Region or Position, follow these instructions.
Prerequisite ü You licensed the Tiles & Positions module and activated it in Tools > Modules Manager.
1. Right-click the respective tile region or position and select Copy. Alternatively, press Ctrl+C.
If you want to copy multiple tile regions and positions, select multiple regions or positions
by dragging a selection box across the objects or click to select multiple objects while press-
ing the Ctrl button.
2. Right-click in the setup and select Paste. Alternatively, press Ctrl+V.
The copied tile regions/positions are pasted next to the original regions/positions.
Prerequisite ü You licensed the Tiles & Positions module and activated it in Tools > Modules Manager.
1. Select the well from where the tile region/ position setting should be copied.
à The selected well is now highlighted by a blue border.
2. Right click within the selected well in the Center Screen Area (outside the tile region) to
open the context menu.
3. Select Copy Container for replication.
4. If you want to select specific wells, use the left mouse button to select the wells into which
you want to paste the copied tile region/ position setting. To select multiple wells, press
Ctrl while selecting the wells.
5. Right click in the Center Screen Area and select the context menu entry Paste Replica-
tion to and either choose Selected Container or All Container.
The copied tile region/ position setting is pasted into the selected wells or all the wells of the car-
rier with the same relative coordinates to the center of each well.
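The replication described above amounts to a simple coordinate transform: each object's offset from the source well center is reapplied to every target well center. The following sketch illustrates this; names and coordinates are illustrative assumptions:

```python
def replicate_to_wells(region_coords, source_center, target_centers):
    """Replicate tile-region/position coordinates into other wells,
    preserving each object's offset relative to its well center, as
    the Paste Replication to command does."""
    rel = [(x - source_center[0], y - source_center[1])
           for (x, y) in region_coords]
    return {
        target: [(target[0] + dx, target[1] + dy) for (dx, dy) in rel]
        for target in target_centers
    }

copies = replicate_to_wells(
    region_coords=[(10.5, 20.5)],          # one region near the source center
    source_center=(10.0, 20.0),
    target_centers=[(30.0, 20.0), (50.0, 20.0)],
)
print(copies[(30.0, 20.0)])  # → [(30.5, 20.5)]
```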
5.7.10.1 Introduction
To create local focus surfaces, you must distribute support points across your tile regions and as-
sign their focus position. Tile-region-specific focus values are then interpolated to generate a fo-
cus surface that approximates the topology of the area you want to image.
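The interpolation described above can be sketched as a least-squares fit of a polynomial surface to the support points. The example below fits a second-order (parabolic) surface; ZEN's exact fitting method is not specified here, so this is a conceptual illustration only:

```python
import numpy as np

def fit_focus_surface(x, y, z):
    """Fit a second-order focus surface
    z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2
    to support-point coordinates by least squares."""
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def evaluate(coeffs, x, y):
    """Evaluate the fitted surface at an arbitrary stage position."""
    a, b, c, d, e, f = coeffs
    return a + b * x + c * y + d * x**2 + e * x * y + f * y**2

# Nine support points on a tilted, slightly curved surface:
x = np.array([0.0, 1, 2, 0, 1, 2, 0, 1, 2])
y = np.array([0.0, 0, 0, 1, 1, 1, 2, 2, 2])
z = 5.0 + 0.1 * x - 0.2 * y + 0.05 * x**2
coeffs = fit_focus_surface(x, y, z)
print(float(evaluate(coeffs, 1.5, 1.5)))
```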
Prerequisite ü To create a local focus surface you need the Tiles & Positions module.
ü You have set up a Tiles experiment with at least one tile region.
ü You are on the Acquisition tab in the Tiles tool.
1. Click Show Viewer.
à The advanced tile setup opens.
2. Select a tile region for which you want to create support points. To do this, click on the cor-
responding tile region in the list in the Tile Regions section of the Tiles tool. Alternatively,
you can select tile regions by clicking directly on the desired tile region in the Advanced
Setup view. Both methods allow you to select several tile regions when pressing the Ctrl
key.
3. Open the Focus Surface and Support Points section in the Tiles tool.
4. To add a single support point at the current stage position, click Current Position. Alterna-
tively, you can add a single support point to the center of the currently selected tile region
by clicking Center of Tile Region.
5. Under Add Multiple Support Points, you have the settings to add multiple support
points. Indicate the number of Columns and Rows for the distribution of the reference
points. Alternatively, for larger tile regions (>200 tiles), we recommend the Onion Skin distribution method. Depending on the total size and shape, you might need to adjust the density parameter and/or the maximum number of support points to optimize the result. Typically, this method works best with large, irregular, or rounded tile regions.
6. Click Distribute.
à The support points are distributed within the selected tile region(s) and shown as yellow
points in the stage view.
à The support points of the selected tile region are displayed with their coordinates in the
Local (per Tile Region) list in the Focus Surface and Support Points section of the
Tiles tool.
7. If necessary, you can adjust the distribution of the support points manually in the Tiles -
Advanced Setup. You can change the position of the support points using drag & drop.
8. Repeat the steps until you have distributed reference points across all desired tile regions.
You have successfully distributed support points across the tile regions.
Info
Automatic Distribution
If you activate Auto-Distribute for New Tile Regions in the Focus Surface and Support
Points Section [} 366] of the Tiles tool, support points are automatically added and distrib-
uted for all newly created tile regions.
Info
Support Point Distribution
Distribute the support points evenly across your tile region. The more irregular the surface of
your specimen, the more reference points you should set. An even but tilted surface requires
at least 4 reference points for a solid calculation, while a simple saddle surface requires at least
9 reference points. A high reference point density leads to a more precise result, although the
maximum useful density is one reference point per tile. The generic method follows a simple grid pattern to place the support points. It works well on smaller, regular rectangular tile regions. For larger tile regions (>200 tiles), the Onion Skin method will likely provide better results. In some cases, trial and error might be needed to optimize the parameters.
1. Click on the Verify button in the Focus Surface and Support Points section of the Tiles
tool.
à The Verify Tile Regions/Positions dialog opens.
2. Select the Helper Method you want to use. This will support you in determining the z-val-
ues. The options are Autofocus (AF) and Definite Focus (DF). If you have neither, you can only adjust z-values manually.
3. Click on the Move To Current Point button.
à The stage automatically moves to the support point that is highlighted in the reference
point list. Alternatively, you can also double-click on the support point you want to check
in the list.
4. In the Live mode use the Focus tool to adjust the z-value.
5. Click on the Set Z and Move to Next button.
à The checked reference point is marked with a green check mark.
à The stage moves automatically to the next support point in the list.
6. Repeat the last 3 steps until you have checked all the support points.
à The message All points have been verified appears.
7. Close the Verify Tile Regions/Positions dialog.
You have adjusted and verified the z-values of all support points.
Info
Positions always have a focus, which is determined by the z-value of the position. If you use
positions in addition to tile regions, you can verify the z-values of the positions with the help of
a similar dialog. Open this dialog by clicking on the Verify button in the Positions section of
the Tiles tool.
1. If necessary, select the interpolation level in the Interpolation Degree dropdown list in the
Focus Surface and Support Points section of the Tiles tool.
You have successfully created a local focus surface. You can now start the experiment. To ensure
that the tiles are acquired along the focus surface the software automatically selects the most ap-
propriate focus strategy. For more information on focus strategies read the chapter Working with
Focus Strategies [} 63].
Info
The minimum number of support points necessary per tile region is indicated in the Interpola-
tion Degree dropdown list for each entry. The calculation is more robust if the number of support points exceeds this minimum number. We therefore recommend that you only increase
the interpolation degree as far as the surface of the sample demands, even if you have set
more support points. If the number of support points does not correspond to the minimum
number for the selected interpolation degree, the interpolation degree will be reduced auto-
matically. By default, ZEN uses a second order interpolation degree that creates a parabolic fo-
cus surface and requires at least 9 support points. Typically, this will be suitable for many sam-
ples and imaging scenarios.
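The relationship between interpolation degree and the number of support points follows from the number of coefficients of a two-variable polynomial surface: a degree-1 (tilted plane) surface has 3 coefficients, a degree-2 (parabolic) surface has 6. The minima listed in the dropdown (4 and 9) exceed these counts, which, as an interpretation, makes the least-squares fit over-determined and therefore more robust:

```python
def surface_coefficients(degree):
    """Number of coefficients of a 2D polynomial surface of the given
    degree: (degree + 1) * (degree + 2) / 2. A fit needs at least this
    many support points; ZEN's listed minima are higher."""
    return (degree + 1) * (degree + 2) // 2

for degree, listed_minimum in [(1, 4), (2, 9)]:
    print(degree, surface_coefficients(degree), listed_minimum)
```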
To create a global focus surface, you must distribute support points across your sample carrier
and define their focus position. A focus area across the sample carrier is then interpolated from
the values of these reference points.
Prerequisite ü You have configured the general settings for setting up a tile experiment (experiment created,
at least one channel defined, Tiles dimension activated).
ü To create a global focus surface, you need the Tiles & Positions module.
ü You are on the Acquisition tab in the Tiles tool.
1. Open the Sample Carrier section.
2. Click on the Select... button.
à The Select Template dialog opens.
3. Select the sample carrier template that you want to use.
4. Click on Options and select Copy And Edit....
à A copy of the existing template is generated and opened in the sample carrier editor.
5. To distribute support points across the sample carrier template, open the Global Support
Points section.
6. Select the containers in which you want to create global support points. To do this, press
the Ctrl key and click on the containers.
7. Click on the Distribute One Support Point For Each Selected Container button.
à One support point is assigned to each selected container.
à The support points are distributed automatically across the sample carrier.
à You can add further support points manually using the Add button .
8. To close the Editor window, click on OK.
9. To select the edited sample carrier template, click on OK.
à If you wish to re-edit the global support points at any time, click on the Edit icon in
the sample carrier section to open the sample carrier editor again.
10. To calibrate the sample carrier, click on the Calibrate button and follow the wizard.
You have successfully distributed support points across a sample carrier template and have se-
lected and calibrated it.
1. In the Tiles tool open the Focus Surface and Support Points section.
2. Go to the Global (on Carrier) tab.
à All the support points of the selected sample carrier template are displayed in the Sup-
port Points list.
3. Click on the Verify button.
à The Verify Global Support Points dialog opens.
4. Select the Helper Method you want to use. This will support you in determining the z-val-
ues. The options are Autofocus (AF) and Definite Focus (DF). If you have neither, you can only adjust z-values manually.
5. Click on Move To Current Point.
à The stage automatically moves to the support point that is highlighted in the list. Alterna-
tively, you can also double-click on the support point in the list.
6. In the Live mode use the Focus (or SW Autofocus) tool to set the z-value.
7. Click on Set Z and Move to Next.
à The support point is marked with a check mark.
à The stage automatically moves to the next support point in the list.
8. Repeat the last 3 steps until you have checked all the support points.
à The message All points have been verified appears.
9. Close the Verify Global Support Points dialog.
You have adjusted and verified the Z-values of all support points.
1. Select the interpolation degree in the Interpolation Degree dropdown list in the Focus
Surface and Support Points section.
You have successfully created a global focus surface.
You can now set up your tile experiment using the sample carrier. Further information on this can
be found under: Using Sample Carriers [} 356]. To ensure the tiles are acquired along the focus
surface during the experiment the software automatically selects the most appropriate focus strat-
egy in the Focus Strategy tool. For information on focus strategies read the chapter Working
with Focus Strategies [} 63].
Info
The minimum number of support points necessary is indicated in the Interpolation Degree dropdown list for each entry. The calculation is more robust if the number of support points exceeds this minimum number. We therefore recommend that you only increase the interpolation degree as far as the surface of the carrier demands, even if you have created more support points. If the number of support points does not correspond to the minimum number for the selected interpolation degree, the interpolation degree will be reduced automatically. Interpolation degree 1 – Tilted Plane (at least 4 support points) is typically sufficient to compensate for any tilting of the sample carrier.
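For intuition, the degree-1 "Tilted Plane" case corresponds to a least-squares fit of a plane z = a + b·x + c·y through the support points. The sketch below is purely illustrative and independent of ZEN; all coordinates are hypothetical values in micrometers:

```python
import numpy as np

# Hypothetical support points (x, y, z) in µm measured on a tilted carrier.
points = np.array([
    [0.0,    0.0,   10.0],
    [1000.0, 0.0,   12.0],
    [0.0,    800.0, 11.0],
    [1000.0, 800.0, 13.0],
])

# Degree 1 (tilted plane): z = a + b*x + c*y has 3 unknowns, so the 4th
# support point over-determines the system; exceeding the minimum makes
# the least-squares solution more robust, as noted above.
A = np.column_stack([np.ones(len(points)), points[:, 0], points[:, 1]])
coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)

def focus_z(x, y):
    """Interpolated z-value for a tile at stage position (x, y)."""
    a, b, c = coeffs
    return a + b * x + c * y

print(round(focus_z(500.0, 400.0), 2))  # z at the carrier center -> 11.5
```

Higher interpolation degrees simply add more polynomial terms to the fit, which is why they require more support points.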
In some cases it can be helpful not only to display the well number together with the acquired images (Path: Graphics > Frequent Annotations > Carrier Container Name) but also to create certain additional annotations for different tile regions or positions, e.g. "control condition" or "experimental condition 1".
For this purpose, the software allows you to add and edit names and categories for the different tile regions/positions that have been generated.
Prerequisite ✓ You have licensed the Tiles & Positions module and activated it in Tools > Modules Manager.
1. To assign individual names to different individual positions and/or tile regions in a well plate experiment, click on the respective name in the Tile Regions or Positions list in the Tiles tool. You can now edit the name field; press Enter to finish.
2. Repeat this step to rename different tile regions or positions.
3. To assign or edit categories of your tile regions/ positions, first select all desired tile regions/
positions that should be grouped in the same category.
4. Below the Tile Regions/Positions list, in the Category field of the Properties section, click on the Options button and select New from the dropdown list.
→ The New Category window opens.
5. Enter a Name and add a Description for the selected tile regions/positions.
6. Assign a Color to the new category by clicking on the color bar and choosing a preferred color.
7. Click on OK to create the new category.
→ The New Category window closes and the new category is created.
8. As Category, choose the desired category for the selected tile regions/positions from the dropdown list.
The chosen category is now assigned to the selected tile regions/positions.
Info
§ To display the name of your tile region/position later in your acquired image(s), go to Graphics > Frequent Annotations > More… and select Image.Scene.Name from the metadata list.
§ Note that a predefined category can also be applied to a selection of tile regions/positions from more than one well. Note also that the assigned color is only used as a feature on the Tiles tab (Left Tool Bar Area).
§ To display a tile region/position Category feature (Name and/or Description) in your acquired image, go to Graphics > Frequent Annotations > More…. Type "category" in the search bar and select the desired feature to be displayed. (Although the option "Color" is offered, the software will not display a meaningful element for it.)
§ To adjust parameters of your annotations (e.g. font size), right-click on the annotation and go to Format > Graphical Elements.
Prerequisite ✓ You have selected several different positions or tile regions and assigned different categories.
1. Under Positions or Tile Regions in the Tiles tool, select a position or tile region.
2. Right-click on the selected position/tile, choose Sort and select By Category.
The positions/tiles will be sorted alphabetically according to the assigned categories.
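Conceptually, this sort is a plain alphabetical ordering on the assigned category names. A minimal sketch with made-up positions (not ZEN's internal data model):

```python
# Hypothetical (name, category) pairs for positions in an experiment.
positions = [
    ("P3", "experimental condition 1"),
    ("P1", "control condition"),
    ("P2", "experimental condition 1"),
    ("P4", "control condition"),
]

# Sort By Category: alphabetical on the category name. Python's sort is
# stable, so positions keep their relative order within each category.
positions.sort(key=lambda p: p[1])
print([name for name, _ in positions])  # ['P1', 'P4', 'P3', 'P2']
```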
Use a sample carrier template to display the size and appearance of your sample carrier (e.g. slide or multiwell plate) in the Advanced Setup. This allows you to distribute tile regions or positions easily across your sample carrier. To use sample carriers, you need to have licensed the Tiles & Positions module and activated it in Tools > Modules Manager.
Prerequisite ✓ You have configured the general settings for setting up a tile experiment (experiment created, at least one channel defined, Tiles dimension activated).
✓ You are on the Acquisition tab in the Tiles tool.
1. Open the Sample Carrier section and click on the Select... button.
→ The Select Sample Carrier Template dialog opens.
If you want to work with a sample carrier that is not listed in the template database, apply the following workflow to create a new template.
Prerequisite ✓ You have fulfilled all prerequisites for a Tiles & Positions experiment.
✓ You have defined at least one channel.
✓ You have activated the Tiles checkbox.
1. Go to the Acquisition tab.
2. Open the Tiles tool and activate the Show All mode.
3. Open the Sample Carrier section and click on the Select… button.
→ The Select Template dialog opens.
When you want to take images of positions/tile regions on a sample carrier that has to be taken off the stage, e.g. for incubation purposes or a change of the immersion medium, proceed as follows to re-position your sample carrier.
Prerequisite ✓ You have run the stage calibration and have located your sample, see chapter Calibrating the Stage and Selecting the Channel [} 343].
✓ You have set up at least one channel and adjusted the light/camera exposure time.
✓ You have activated the Tiles checkbox and the Show All mode.
1. In the Tiles tool, open the Sample Carrier section and click on Select.
→ The Select Template dialog opens.
6. Move a sample reference point into the middle of the crosshair. This reference point can be any unique, identifiable point on the slide and does not have to be in the middle of the slide.
7. Click on Next.
8. Under X/Y Position click on Set Zero.
9. Click on Next.
10. As Calibration Method, select Search Reference Point.
11. Click on Next.
12. Click on Set Current X/Y.
13. In the Tiles tool, click on Show Viewer and add positions/tile regions at your locations of
interest.
14. Once you have defined all your positions/tile regions, click on Options in the Experiment Manager on the Acquisition tab.
15. Save your experiment settings, including the lists of positions/tile regions, by selecting either the Save As or Export entry.
16. Start your experiment and record images from your selected positions/tile regions.
17. Remove your sample from the stage and, for example, put it back into the incubation chamber.
18. Close the software.
You have completed all settings for a successful re-positioning of your sample carrier after the experiment.
Info
§ For demo purposes, select a standard slide that can mimic your test sample.
§ Regarding calibration of your template, you can customize your own carrier, see the chapter Customizing a Sample Carrier Template [} 357]. For slides with one coverslip or well, only Single Reference Point Calibration is available. For multi-well plates, you can choose between 7-point, 4-point, 3-point, and 1-point calibration. This becomes important for adjusting for the rotation of the sample.
§ It is assumed that you use a conventional glass slide with some cells or tissue positioned in the center.
§ You can zoom in and out using the mouse wheel, and move the stage in the Center Screen Area to a point of interest with a double-click on the sample carrier.
§ With Save As, the settings are saved directly in the Experiment Manager. With Export, the settings are saved in a folder of your choice.
Info
If you cycle the power on the microscope, the software will prompt you to calibrate the stage and/or focus drives. If the calibration of the multi-well plate was performed under the same conditions, the sample carrier calibration will still be valid. You must, however, ensure that other parameters such as plate orientation and placement on the microscope have not changed.
4. Move your previously chosen sample reference point into the middle of the crosshair.
5. Click on Next.
6. Under X/Y Position click on Set Zero.
7. Click on Next.
8. As Calibration Method, select Search Reference Point.
9. Click on Next.
10. Click on Set Current X/Y.
11. Now you still need to verify the z-offset of your positions. To do so, follow the corresponding instructions given in the chapters Adjusting Z-Values of Tile Regions [} 341] and Adjusting Z-Values of Positions [} 342].
All of your selected positions/tile regions are now re-assigned to the correct X/Y/Z-values in relation to your (uniquely identifiable) reference point.
You can re-start your experiment and record images from your selected positions/tile regions.
Info
The basic Tiles tool is only visible if a motorized stage is configured with your microscope. The Tiles Advanced Setup and many other functions are only available if you own the Tiles & Positions module and have activated it in the Modules Manager. Additionally, you must activate the corresponding checkbox on the Acquisition tab in the Experiment Manager. This tool is part of the basic license for LSM.
In the Tiles tool you configure the acquisition of images that consist of several image fields. To do this, you define Tile Regions or Positions. In addition, you can set up focus surfaces and sample carrier templates here.
The Tiles tool is located in the Left Tool Area under Multidimensional Acquisition.
Parameter Description
Show Viewer: Only available if you have the license for the Tiles & Positions functionality. Opens the Tiles Advanced Setup [} 372] view in the Center Screen Area.
Info
The Sample Carrier, Focus Surface and Options sections are only visible if the Show All mode is activated. If you have no license for the Tiles module, you will only find the Tile Regions, Positions and Options sections here.
The different sections of the tool are described in the next chapters.
Here you can define the desired tile regions and add them to the image.
Note: The first section with controls is only visible if you have no license for the Tiles & Positions module. With a license, these controls are selected from the Left Toolbar [} 375] in the Tiles Advanced Setup.
Parameter Description
Contour: This parameter is only visible if the Show All mode is activated. Here you select the shape or contour of the tile region that you are adding. Simply click on the corresponding button to select the desired contour. The selected contour is highlighted in blue.
Mode
- Tiles: If selected, you have to enter the number of tiles as a reference for the size of the tile region. Enter the number of tiles in the X/Y input fields. If you are adding a circular tile region, enter the number of tiles for the diameter in the Diameter input field.
- Size: If selected, you have to enter the size as a reference for the size of the tile region. Enter the size of the tile region in the X/Y input fields. If you are adding a circular tile region, enter the diameter of the tile region in the Diameter input field.
- Stake: If selected, you can define a tile region by placing at least two markers (user-defined X/Y stage coordinates). If you want to modify the tile region (expand/reduce), adjust the tile region to the desired size. To complete the tile region, press Done. Circular or rectangular tile regions can be created in this manner by selecting the appropriate contour.
- Add: Adds the tile region to the image. The added tile region will also appear in the Tile Regions list and is activated for acquisition. Added tile regions are displayed as red grids in the stage view of the Advanced Tiles Setup.
Tile Regions List: Displays the added tile regions. The list contains the following columns:
- Name: Here you can edit the name of the tile region.
- Category: Displays the category of the tile region. Categories can be defined in the view options of the Advanced Tiles Setup on the Properties tab.
- Size: Displays the size of the tile region along its x and y axes in micrometers.
- Up and Down: With the Up/Down buttons you can shift the selected list entry one position up or down in the tile regions list. This allows you to modify the acquisition order. Note that the order in the list will only be respected if the sorting of tile regions/positions is deactivated in the Options section (Stage Travel Optimization)!
- Set Current Z for Selected Tile Regions: Sets the current z-position for all selected tile regions.
- Set Current X/Y/Z for Selected Tile Regions: Sets the current X/Y/Z-position for all selected tile regions.
- Unlock: Unlocks the current tile region. Tile regions or positions are only locked if created in carrier mode.
- Sort: By Center Position (Y -> X) sorts all tile regions according to their overall Y position. By Center Position (X -> Y) sorts all tile regions according to their overall X position. By Category sorts all tile regions according to their category. Note that the order in the list will only be respected if the sorting of tile regions/positions is deactivated in the Options section (Stage Travel Optimization)!
- Convert to Positions…: Converts a selected tile region into Positions or a Position Array.
Properties Tile Region: Only visible if you have a license for the Tiles & Positions module. Displays the name of the currently selected tile region if one region is selected.
Category: Only visible if you have a license for the Tiles & Positions module. Shows the currently assigned category of the selected tile region. The Default category is set for all new tile regions.
Options: Only visible if you have a license for the Tiles & Positions module. Opens the options for editing and creating categories.
- Edit: Opens the Edit Category dialog to edit the selected category.
- Delete: Deletes the selected category and sets the category of the tile region to Default.
X: Only visible if you have a license for the Tiles & Positions module. Displays and sets the x-value of the selected tile region.
Y: Only visible if you have a license for the Tiles & Positions module. Displays and sets the y-value of the selected tile region.
Z: Only visible if you have a license for the Tiles & Positions module. Displays and sets the z-value of the selected tile region.
Set Current Z: Only visible if you have a license for the Tiles & Positions module. Sets the Z dimension to the current z-position of the focus drive.
Width: Only visible if you have a license for the Tiles & Positions module. Displays and sets the width of the selected tile region.
Height: Only visible if you have a license for the Tiles & Positions module. Displays and sets the height of the selected tile region.
Verify: Only visible if you have a license for the Tiles & Positions module. Opens the Verify Tile Regions dialog. There you can verify each point of the tile region with regard to focus and position.
Parameter Description
X Position: Displays the x coordinate of the current position.
Add: Adds the current position to the Positions list and activates it for acquisition.
Single Positions: Shows the Single Positions list. To learn more about single positions, see the glossary entry "Position".
Position Arrays: Shows the Position Arrays list and the Positions of Selected Array list, which shows a full single positions list for the selected position array. To learn more about position arrays, see the glossary entry "Position".
Single Position List/Position Array List: Displays the added positions/position arrays. The list contains the following columns and buttons.
- Category: Displays the category of the single position. Categories can be defined in the view options of the Advanced Tiles Setup on the Properties tab.
- Up and Down: With the buttons you can shift the selected list entry one position up or down in the list. This allows you to modify the acquisition order. Note that the Tile Regions/Positions checkbox has to be deactivated in Tiles Options [} 384]. Note that the order in the list will only be respected if the sorting of tile regions/positions is deactivated in the Options section (Stage Travel Optimization)!
Verify: Opens the Verify Tile Regions or Verify Positions dialog [} 371].
Properties Position: Only visible if you have a license for the Tiles & Positions module. Displays the name of the currently selected position.
Category: Only visible if you have a license for the Tiles & Positions module. Shows the currently assigned category of the selected position. The Default category is set for all new positions.
Options: Only visible if you have a license for the Tiles & Positions module. Opens the options for editing and creating categories.
- Edit…: Opens the Edit Category dialog to edit the selected category.
- Delete: Deletes the selected category and sets the category of the position to Default.
X: Only visible if you have a license for the Tiles & Positions module. Displays and sets the x-value of the selected position.
Y: Only visible if you have a license for the Tiles & Positions module. Displays and sets the y-value of the selected position.
Z: Only visible if you have a license for the Tiles & Positions module. Displays and sets the z-value of the selected position.
Set Current Z: Only visible if you have a license for the Tiles & Positions module. Sets the Z dimension to the current z-position of the focus drive.
Set Current Z for Selected Positions: Sets the current z-position for all selected positions.
Set Current XYZ for Selected Position: Sets the current X/Y/Z-position for the selected position.
Import stage marks as positions: Only visible if you have a license for the Tiles & Positions module. Imports the marks from the Stage tool as positions.
Set Current Z for all Positions in selected Arrays: Sets the current z-position for all positions in the selected arrays.
Only visible if the Show All mode is activated and only available with a license for the Tiles & Positions functionality.
Parameter Description
Sample Carrier: Displays the selected sample carrier template. If no template is selected, None is displayed.
Edit Support Points: Opens the sample carrier selection/editor dialog where you can edit and add global support points to the selected sample carrier.
Delete: Deletes the selected sample carrier from the sample carrier field. The template will still be available in the Select Sample Carrier Template dialog.
Move Focus Drive to Load Position Between Containers: Activated: Moves the focus drive to the loading position during the movement of the stage to another container of the sample carrier (e.g. a well or slide). This prevents possible damage. Note that this behavior is only applied during an experiment.
See also
§ Select Template Dialog [} 386]
Parameter Description
Selected Tile Region: Displays the number of currently selected tile regions.
Current Position: Adds a support point at the current stage and focus position. Only available if the stage is positioned within the yellow bounding of the selected tile region.
Center of Tile Region: Adds a support point at the center of the currently selected tile region.
- Generic: Distribution method with a simple column and row approach. ZEISS recommends using this method for smaller tile regions (<200 tiles) of a regular shape, e.g. quadratic, rectangular, or circular.
- Onion Skin: Distribution method for mid-sized or larger tile regions (>200 tiles) of an irregular shape, like you might use to image large-area tissue specimens, e.g. brain slices.
Rows: Only available with Generic. Sets the number of rows of support points within the selected tile region.
Distribute: Distributes the entered number of support points defined in the column and row input fields within the tile region. Previously defined support points will be deleted.
Auto-Distribute for New Tile Regions: Activated: Automatically adds and distributes support points for all newly created tile regions. Deactivated: Support points are distributed only via the Distribute button.
Local (per Tile Region)/Global (on Carrier) tab: Displays a list with local or global support points. You have the following columns and options:
- Container: Allows you to sort the global support points according to their container on the sample carrier.
- Delete: Deletes the selected list entry.
- Options:
– Add Support Point at Current Stage and Focus Position: Adds a new support point at the current stage and focus position.
– Set Current Z for Selected Support Points: Sets the current z-position for all selected support points.
– Set Current X/Y/Z for Selected Support Points: Sets the current X/Y/Z-position for the selected support point.
– Delete All: Deletes all support points from the current tile region.
– Delete all Support Points from Selected Tile Regions: Deletes all support points from the selected tile regions.
– Delete all Support Points from all Tile Regions: Deletes all support points from all tile regions.
Set current XYZ: Sets the current X/Y/Z-position for the selected support point.
Set current Z: Sets the current z-position for all selected support points.
Info
Focus Surface (Verify)
The more variable the surface of your specimen, the higher you should choose the interpolation degree. Higher degrees require more support points. The minimum number of support points required for each interpolation degree is given in the dropdown list. As exceeding this minimum number ensures a solid calculation, we recommend keeping the interpolation degree as low as possible even if you have added more support points. Increase the interpolation degree only as far as the surface condition of your specimen demands. If the number of support points is too low for the selected interpolation degree, the next lower degree for which the minimum is fulfilled is used. By default, ZEN uses the second-order parabolic saddle surface, which requires at least 9 support points. For most applications you will not need to adjust this setting.
Info
Properties of Global Support Points
The properties of a selected global support point differ slightly from those of a local one: you cannot edit the X/Y dimensions because they are fixed by the sample carrier template you have selected. Therefore there is no Set Current X/Y/Z button for global support points. If you want to edit the number and X/Y dimensions of your global support points, this can be done directly via the Sample Carrier section of the Tiles tool.
Parameter Description
Tile Overlap: Defines the overlap of the individual tiles of the tile regions in percent. The value is set to 10% by default. Note that a lower overlap might cause artifacts when stitching the image, as there is less information for a robust correlation. Without any overlap, the images cannot be stitched correctly.
Stage Travel Optimization: In this section you can adjust settings for stage travel during an experiment or preview scan (only with the Tiles & Positions module). Note that in some cases the preview scan function will automatically select a travel mode for the stage that is more appropriate.
- Comb: Acquires tile regions following a comb pattern – always from one travel direction only (left -> right). This scan movement is more precise.
- Spiral: Acquires tile regions following a spiral pattern – from the center of the region to the outer bounds in a clockwise motion. This mode works only for regions with rectangular or elliptical contours.
Tile Regions/Positions: Activated: Individual positions and tile regions are not acquired in the sequence in which they are defined in the Tile Regions list. The stage movement is automatically adapted to the location of the individual tile regions and positions. If you add or remove tile regions or positions, the sequence of acquisition therefore also changes.
- Sort by X, then Y: The tile regions and positions are sorted by their absolute position (first x, then y).
- Sort by Y, then X: The tile regions and positions are sorted by their absolute position (first y, then x).
Carrier Wells/Container: Activated: Applies the selected travel pattern (meander or comb) when acquiring tiles in sample carriers with wells/containers. When, for example, meander is used, the stage travels between wells in the order A1, A2 ... A4 -> B4, B3 ... B1 etc.
- Comb: Acquires tile regions following a comb pattern – always from one travel direction only (left -> right). This scan movement is more precise.
Use Stage Speed from Stage Tool: This function is only visible if the corresponding option is activated in Tools > Options > Acquisition > Tiles & Positions. By default, backlash correction is used during stage movement. Activated: The software uses the stage travel speed which is set in the Stage tool (Right Tool Area).
Stage and Focus Backlash Correction: Activated: Stage and focus positioning is done with a backlash correction, which is more precise but slightly slower.
Move Focus to Load Position Between Regions/Positions: Activated: The focus drive is moved to the load position while moving to another tile region, position, or well.
Split Scenes into Separate Files: Activated: The scenes (e.g. tile regions and positions) are stored in separate physical files. They are still combined into one logical image file.
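To see how the Tile Overlap setting and a meander-style travel pattern interact, here is an illustrative calculation of tile start positions for a rectangular region. This is a back-of-the-envelope sketch, not ZEN's actual algorithm; the region and tile sizes are hypothetical:

```python
import math

def tile_grid(region_w, region_h, tile_w, tile_h, overlap_pct=10.0):
    """Return tile origins covering a region, in meander (serpentine) order.

    With 10% overlap (the default), each stage step is 90% of a tile,
    which leaves enough shared image content for robust stitching.
    """
    step_x = tile_w * (1.0 - overlap_pct / 100.0)
    step_y = tile_h * (1.0 - overlap_pct / 100.0)
    nx = max(1, math.ceil((region_w - tile_w) / step_x) + 1)
    ny = max(1, math.ceil((region_h - tile_h) / step_y) + 1)
    tiles = []
    for row in range(ny):
        # Meander: even rows left -> right, odd rows right -> left.
        cols = range(nx) if row % 2 == 0 else reversed(range(nx))
        for col in cols:
            tiles.append((col * step_x, row * step_y))
    return tiles

# Hypothetical region of 2000 x 1000 µm imaged with 500 x 500 µm tiles:
order = tile_grid(2000, 1000, 500, 500)
print(len(order))  # 15 tiles (5 columns x 3 rows)
```

A comb pattern would instead use `range(nx)` for every row, trading some travel time for more repeatable stage approach direction.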
Note that the z-values of positions, tile regions (unless they have at least one support point), local support points, and global support points are verified in a separate dialog accessed via the Focus Surface and Support Points section. As the dialog contains the same items and options for verifying the z-values, it is described here once.
Parameter Description
Tile Regions/Positions List: Displays all the tile regions (TR), positions (P), or support points (SP) currently active in your experiment. The list contains the following columns and buttons:
- Status: Here you can see if the z-position is already verified. In that case a green checkmark appears in the corresponding row.
- Z (µm): Displays the z-position of the tile region, position, or support point.
- Tile Region: Shows the tile region to which the local support point belongs.
- Options: Opens the options menu for verifying tile regions/positions, see the description below.
Verification Helper Method
- None (manual adjustment): If selected, you need to manually adjust the focus for each point to be verified.
- Autofocus (AF): Only available if your system has a motorized focus drive (z-axis) and the software autofocus module. If selected, you can use the software autofocus for adjusting the focus for each point to be verified. The corresponding buttons will then appear in the dialog.
- Definite Focus (DF): Only available if your system has a Definite Focus device. If selected, you can use the Definite Focus for adjusting the focus for each point to be verified. The corresponding buttons will then appear further below in the dialog.
Include Z when Moving to Points: Activated: ZEN moves to the selected object and adjusts the position of the z-drive to the currently assigned z-value.
Set Z & Move to Next: Sets the current z-value for the selected support point and sets the status to verified. The software then moves the stage to the next support point.
Run AF (or DF) and Set Z: This button is only visible if you have selected Autofocus (AF), Reflex Autofocus (AF), or Definite Focus (DF) as Helper Method. Runs the software autofocus/Definite Focus and sets the current z-position to verified.
Use AF (or DF) to Verify the Remaining: This button is only visible if you have selected Autofocus (AF), Reflex Autofocus (AF), or Definite Focus (DF) as Helper Method. Automatically moves to the remaining points and determines the z-value using the selected autofocus option for each point.
Current Point Verified: Here you can change the status of the selected support point from verified to unverified (or vice versa).
Set Current Z for Selected Points: The current z-value is set for all selected points. You can press Ctrl and the left mouse button to select multiple points, or Ctrl + A to select all.
Set Current Z for all Points: The current z-value is set for all points.
Apply Z-Offset...: Opens a dialog to apply a z-offset to all or the selected points.
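The effect of Apply Z-Offset... can be pictured as adding a constant to the stored z-values of the chosen points, e.g. after re-seating the sample at a slightly different height. A minimal sketch with hypothetical point names and values:

```python
# Hypothetical verified points: name -> z-value in µm.
points = {"SP1": 10.0, "SP2": 12.5, "SP3": 11.2}

def apply_z_offset(points, offset, selected=None):
    """Shift the z-values of the selected points (or all points) by a constant."""
    names = set(selected) if selected is not None else set(points)
    return {n: (z + offset if n in names else z) for n, z in points.items()}

print(apply_z_offset(points, 5.0))                    # all points shifted
print(apply_z_offset(points, 5.0, selected=["SP2"]))  # only SP2 shifted
```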
Here you can visualize and plan your Tiles and Positions experiment. The Advanced Tiles Setup opens by clicking on the Show Viewer button in the Tiles tool on the Acquisition tab of the Left Tool Area. In the Center Screen Area you can see the Stage View [} 373]. When the Tiles Advanced Setup opens, the stage view is zoomed to a predefined factor. You can change the zoom on the Dimensions tab, or by scrolling the mouse wheel.
To navigate around, you have the following options:
§ Move the mouse pointer onto the live navigator, then press and hold the left mouse button and drag the live navigator to the desired location. Release the left mouse button to finish.
§ Place the mouse pointer at the location to which you want to move the stage and double-click.
§ Use the software joystick in the Stage tool of the Right Tool Area to navigate. Alternatively, use the hardware joystick of your stage controller.
§ Click on the arrowheads in each corner and along each edge to move the view in that direction.
Additional settings and tools relating to tile regions or positions can be found in the specific view options.
1 Left Toolbar
Here you select which setup you use to set tile regions or positions. For more information, see Left Toolbar [} 375].
2 Top Toolbar
Depending on what you have selected on the Left Toolbar, different features to create tiles or positions are displayed here. For more information, see Top Toolbar [} 376].
3 Stage View
The Stage View shows the full travel range of the microscope stage, along with the current stage position, the graphical display of sample carriers, and your acquired mosaic images. For more information, see Stage View [} 373].
4 View Options
General and tiles-specific [} 382] view options are displayed here. Note that the available functions may differ depending on whether the Show All layout is activated or deactivated.
The Center Screen Area shows the full travel range of the microscope stage, along with the current stage position, the graphical display of sample carriers, and your acquired mosaic images. You can control the stage view using the arrow icons at the edges of the image area. The view can be enlarged, reduced, or moved using the general control elements.
Live Navigator: Shows the current stage position including the live image as a frame outlined in blue. To move the frame, double-click on the position to which you want to move it. Alternatively, hold the left mouse button on the live navigator tool while dragging the mouse. The frame can also be used to control acquisition. If you click on one of the frame's blue arrow icons, an image is acquired and the Live Navigator tool is moved one frame width in the relevant direction. You can create tile images of your sample easily in this way.
Tile Region: Represents the tile regions in the stage view by a red grid.
Position Array: Represents the position arrays in the stage view by the corresponding position symbols surrounded by a dashed line.
A right-click in the Stage View displays a context menu with Tiles & Positions specific options [} 374].
When you right click in the Stage View of the Tiles Advanced Setup, you have the following
Tiles & Positions specific options:
Parameter Description
Show Images Activated: Displays images in the Stage View.
Remove Images Displays the options of what images are removed from the Stage
View.
Parameter Description
– All Manual Removes all manually taken snap images.
Snap Images
– All Preview Removes all preview scan images including preview images imported
Scan Images into the Stage View.
Import Preview Opens the file browser to select an image which gets imported into
Image the Stage View.
Save Preview Im- Opens the preview images in separate image documents in ZEN,
ages which allows you to save them or work with them if necessary.
Bring Navigator Re-centers the Stage View onto the current stage position coordi-
into View nates at the preset zoom level. You can also activate this function
with Ctrl + B.
Show Tile Regions Activated: Displays the names of the tile regions in the Stage
Name View.
Show Tile Rectan- Activated: Displays a red rectangle around the individual positions in
gle for Positions the Stage View.
In the Tiles - Advanced Setup, the main features for creating regions and positions are located
in two toolbars at the top and on the left of the viewer. The options available in the top toolbar
depend on what is selected in the left toolbar.
Here you select which setup you use to set tile regions or positions.
Parameter Description
Displays the tools to define the settings for a preview scan, see Pre-
view Scan Toolbar [} 376]. Typically, a low magnification objective
Preview and a channel which protects your sample (e.g. transmitted light) are
used. This gives you a low resolution overview of the sample to mark
tile regions and/or positions.
Tiles Here you can select which setup you want to use to set the tile re-
gions. The corresponding tools are displayed in the top toolbar, see
Top Toolbar [} 376].
– Displays the tools to define tile regions by means of contour, see Con-
tour Toolbar [} 377].
Setup by con-
tour
– Displays the tools to define tile regions by specifying two or more
marker positions, see also Stake Toolbar [} 378].
Setup by
stakes
Positions Here you can select which setup you want to use to set the posi-
tions. The corresponding tools are displayed in the top toolbar, see
Top Toolbar [} 376].
Depending on what you have selected on the left toolbar of the Tiles - Advanced Setup, differ-
ent features to create tiles or positions are displayed. For information on the various top toolbars,
see the list below.
See also
2 Left Toolbar [} 375]
Here you can define the settings for a preview scan. Typically, a low magnification objective and
a channel which protects your sample (e.g. transmitted light) are used. This will give you a low
resolution overview of the sample to mark tile regions and/or positions.
Parameter Description
Use Experiment Uses the settings of the experiment for the preview scan.
Settings
Sets the binning of the camera for the preview scan experiment only.
The exposure time will be compensated automatically for the preview
scan. Note that higher binning will reduce the scan time (due to
shorter exposures of the camera), but reduce the image resolution.
Delete existing Activated: Deletes the already existing preview images before the
preview images preview scan is started.
See also
2 Generating a Preview Scan [} 345]
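The automatic exposure compensation mentioned above follows the binning power law described under Tools > Options > Acquisition > Tiles & Positions (default exponent 2.0). A minimal sketch of that rule; the function name and signature are illustrative, not part of ZEN:

```python
def preview_exposure(exp_ms: float, exp_binning: int,
                     preview_binning: int, power: float = 2.0) -> float:
    """Exposure time used for the preview scan when its binning differs
    from the experiment's. With the default power of 2.0 (quadratic),
    going from 1x1 to 2x2 binning divides the exposure by 4, matching
    the example given in the options description."""
    ratio = preview_binning / exp_binning  # linear binning ratio
    return exp_ms / (ratio ** power)

print(preview_exposure(100.0, 1, 2))       # quadratic: 100 ms -> 25 ms
print(preview_exposure(100.0, 1, 2, 1.0))  # linear: 100 ms -> 50 ms
```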
Here you can define the tile regions by means of the contour.
Parameter Description
With this tool you can select an already created tile region by clicking
on it to move or edit it.
Selection Mode
Keep Tool Activated: Keeps the selected tool active. You can use the tool sev-
eral times in succession without having to reselect it.
See also
2 Creating Tile Regions by Contour [} 346]
Here you can define the tile regions by means of the number or size.
Parameter Description
Draws a rectangular tile region.
Rectangle
Tiles In this mode, you enter the number of tiles as a reference for the
size of the tile region.
Enter the number of tiles in the X/Y input fields. If you are adding a
circular tile region, enter the number of tiles for the diameter in the
Diameter input field.
Size In this mode, you enter the size of the tile region directly. Enter the
size of the tile region in the X/Y input fields. If you are adding a cir-
cular tile region, enter the diameter of the tile region in the Diame-
ter input field.
Add Tile Region Adds the tile region to the Tile Regions List and activates it for ac-
quisition.
Added tile regions are displayed in the form of red grids in the stage
view of the Advanced Tiles Setup.
Keep Tool Activated: Keeps the selected tool active. You can use the tool sev-
eral times in succession without having to reselect it.
See also
2 Creating Tile Regions by Predefined [} 347]
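The Tiles and Size modes above are two ways of expressing the same region extent, linked by the size of a single camera tile. A rough sketch of the conversion along one axis; the helper names and the assumption of non-overlapping tiles are illustrative, not taken from ZEN:

```python
import math

def tiles_for_region(region_um: float, tile_um: float) -> int:
    """Number of tiles needed to cover a region of the given extent
    along one axis, assuming non-overlapping tiles of size tile_um."""
    return max(1, math.ceil(region_um / tile_um))

def region_for_tiles(n_tiles: int, tile_um: float) -> float:
    """Extent covered by n_tiles adjacent tiles along one axis."""
    return n_tiles * tile_um

# Example: a 1000 µm wide region with a 450 µm camera field of view
print(tiles_for_region(1000.0, 450.0))  # -> 3 tiles
print(region_for_tiles(3, 450.0))       # -> 1350.0 µm
```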
Here you can define the tile regions with stake markers.
Parameter Description
Adds a marker position to the current stage and focus position.
Add Marker Posi-
tion
Removes the last marker position.
Remove Marker
Position
Done Saves the current tile regions setup and resets the marker positions.
Here you can define the tile regions automatically by means of the fill factor of the sample carrier.
Info
4 A sample carrier must have been selected in the Sample Carrier section of the Tiles tool.
4 Manually created tile regions and positions (setup by Contour and setup by Predefined)
will be deleted if you switch to the setup by Carrier. If you want to combine manual and
automatic setup, first use setup by Carrier and then switch to a manual setup. Tile regions
that are created automatically by setup by Carrier are permanently assigned to a container
and locked against manual editing by default. You can unlock the tile regions in the Tiles
tool by selecting the desired tile region and unlocking it via the options menu in the tile
regions list. You can also unlock all tile regions here if necessary.
Parameter Description
Select Sample Car- Only visible if no sample carrier is currently selected.
rier
Opens the dialog to select a sample carrier template.
Fill Factor Here you can enter the fill factor used to fill the selected container.
Columns/Rows Here you can add single tile regions to a container by defining the
number of columns and rows of tiles. The tile region is always
placed at the center of the well container.
See also
2 Tiles Carrier Toolbar [} 379]
2 Sample Carrier Section [} 365]
2 Tiles Tool [} 359]
2 Creating Tile Regions by Carrier [} 347]
Here, you can define the positions by means of the location. You can add various positions in the
Stage View using the mouse.
Parameter Description
Selects an element in the stage view to edit or move it.
Selection
Keep Tool Activated: Keeps the selected tool active. You can use the tool sev-
eral times without having to reselect it.
See also
2 Creating Positions by Location [} 348]
Here you can define the positions by means of position arrays. You can add various contours for
position arrays in the stage view.
Parameter Description
With this tool you can select an already created position array to
move or edit it.
Selection
Keep Tool Activated: Keeps the selected tool active. You can use the tool sev-
eral times in succession without having to reselect it.
Distribute Posi-
tions by
Adjusts the overall position of the single positions in the position ar-
ray.
- None The single positions of the position array will be distributed evenly
within the array.
- Center The single positions of the position array will mainly be distributed
near the center of the position array. Fewer positions will be at the
edges of the array.
- Edge The positions of the position array will be distributed to the edges of
the array. Fewer positions will be in the center of the array.
See also
2 Creating Positions by Array [} 348]
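How ZEN actually places single positions for the None/Center/Edge options is not documented here; as an illustration only, the following sketch biases randomly sampled offsets toward the center or the edges of a unit-square array (the exponents are purely hypothetical weightings):

```python
import random

def distribute_positions(n, mode="none", seed=0):
    """Place n positions inside a unit-square array with an optional
    radial bias, mimicking the None/Center/Edge options. The exact
    weighting ZEN uses is not documented; the exponents below are
    purely illustrative."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        # Uniform offset from the array center (0.5, 0.5)
        dx = rng.uniform(-0.5, 0.5)
        dy = rng.uniform(-0.5, 0.5)
        u = rng.random()
        if mode == "center":
            scale = u ** 2       # shrinks most offsets toward the center
        elif mode == "edge":
            scale = u ** 0.25    # keeps most offsets near full length
        else:                    # "none": evenly distributed
            scale = 1.0
        pts.append((0.5 + dx * scale, 0.5 + dy * scale))
    return pts

# Positions biased toward the center of the array
center_pts = distribute_positions(50, mode="center")
```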
Here you can define the positions automatically on the relevant sample carrier.
Info
4 A sample carrier must have been selected in the Sample Carrier [} 365] section of the
Tiles tool.
4 Manually created tile regions and positions (setup by Contour and setup by Predefined)
will be deleted if you switch to the setup by Carrier. If you want to combine manual and
automatic setup, first use setup by Carrier and then switch to a manual setup. Tile regions
that are created automatically by setup by Carrier are permanently assigned to a container
and locked against manual editing by default. You can unlock the tile regions in the Tiles
tool by selecting the desired tile region and unlocking it via the options menu in the tile
regions list. You can also unlock all tile regions here if necessary.
Parameter Description
Select Sample Car- Only visible if no sample carrier is currently selected.
rier
Opens the dialog to select a sample carrier template.
Removes the position array for every selected container of the sample
Remove carrier.
Distribute Posi-
tions by
- None The single positions will be distributed evenly within the array.
- Center The single positions will mainly be distributed near the center of the
position array. Fewer positions will be at the edges of the array.
- Edge The positions will be distributed to the edges of the array. Fewer po-
sitions will be in the center of the array.
See also
2 Creating Positions by Carrier [} 349]
These view options are specific to the Tiles & Positions module.
Here you can define how the software behaves when the live image is started and the Tiles - Ad-
vanced Setup is open.
Parameter Description
New Tab Opens the live or continuous image into a new image container.
Separate Container Opens a live or continuous image into a separate container. Both the
Tiles - Advanced Setup and the live or continuous images are visible
in parallel. We recommend activating this checkbox, especially if you
use two separate monitors. You may also want to use the Automatic
Container Layout setting found in the view option dropdown list of
the Document Bar [} 28] in the Center Screen Area [} 26].
Tiles View Displays the live or continuous image in place in the Stage View of
the Tiles - Advanced Setup.
Center to Stage You can use this function if you need to quickly re-find your current
Position location. Re-centers the Stage View onto the current stage position
coordinates at the preset zoom level. You can also activate this func-
tion with Ctrl + B. You can call the function to focus the image with
the mouse wheel by pressing and holding the Ctrl key when the
mouse cursor is over the stage view.
Info
Only the containers/wells whose tile regions and positions were set up with the Tiles Carrier
Toolbar [} 379] or the Positions Carrier Toolbar [} 381] will be taken into account.
Symbol Description
Empty containers/wells, meaning that no tile regions or positions
were set up with the Carrier option, are represented by a grey con-
/ tainer/well.
The currently Active container/well is represented by a blue border.
Info
Right-click opens a small context menu. Here you can copy the contents of the selected well,
or paste the contents to the selected or all wells.
The additional options for the Tiles module allow you to set up several options for image acqui-
sition and additional information. The Tiles options dialog can be found in the menu bar under
Tools > Options > Acquisition > Tiles & Positions.
Note: For the changes to be effective you might need to close and reopen the advanced setup
viewer.
Parameter Description
Automatically Activated: Automatically starts the Live mode in the Center Screen
Start Live Mode in Area if you click the Advanced Setup button in the Tiles tool on
the Advanced the Acquisition tab.
Setup View
Uncheck this option to prevent unnecessary specimen bleaching. The
default is not activated.
Automatic Snap by Activated: Acquires an image if you click on one of the frame's blue
Clicking the Live arrow icons. The Live Navigator tool moves one frame width in the
Navigator Buttons relevant direction. You can create tile images of your sample easily in
this way.
Enable Stage In the Live navigator tool the current stage position including the live
Movement with image is shown as a frame outlined in blue. To move the frame, dou-
Live Navigator ble-click on the position to which you want to move it. Alternatively,
place the mouse cursor over the blue frame, press and hold the left
mouse button and drag the live navigator to the desired location.
Activated: Allows you to move the Live Navigator tool by dragging
it to a new location.
Show Stage and Activated: In the Tiles option, the setting to switch the backlash cor-
Focus Backlash rection on or off is shown. By default it is hidden.
Correction Setting
in the Options
Delimiter for CSV Specifies the delimiter for a CSV export or import. Select Comma (de-
Export/Import fault), Semicolon or Tab.
Ask Whether Sup- When the support points and/or positions are determined by a soft-
port Points/Posi- ware autofocus run, the existing points can be overwritten with the
tions Should be new Z values.
Overwritten
Activated: Shows a message box asking if the points should be over-
written if there is an autofocus Z value.
Focus Surface Out- Ignores support points that are significantly outside the interpolated
lier Determination focus surface.
You have the following setting options available:
- Maximum Inter- This value can be 0 or 1. If it is 1, a linear fit is used to detect the out-
polation Degree lier support points. This is the default. If it is 0, a simple average value
for Outlier De- is used to detect outliers.
tection
Delay Time After Defines a delay period which is used for all stage movements in a tiles
Stage Movements and position experiment or movements controlled in the advanced tile
setup. The delay helps prevent movement in samples where, for ex-
ample, a large volume of liquid is present in the sample holder. It can
be used with the stage speed and acceleration options to optimize ex-
periments with this type of sample.
Binning Compen- Defines the power to which the binning ratio is modified to automati-
sation of Exposure cally determine the exposure time value used for a preview scan where
Time in Preview the binning setting between the experiment and preview scan differs.
Scans The default value is 2.0, i.e. quadratic. Thus, for example, the exposure
time would be reduced by a factor of four if the experiment binning is
1x1 and the preview scan binning is 2x2. The value can be varied be-
tween 1.0 and 2.0 in steps of 0.1.
- Use Imaging Activated: Default setting for the live image that allows navigation
Device from and focus interaction during the carrier calibration wizard.
Selected
Channel with
"Acquisition"
Settings
- Use Active This option is only relevant for systems with a wide field (camera
Camera with based) detector.
"Locate" Set- Activated: Allows you to alternatively apply locate camera settings
tings for use in the carrier calibration wizard (live image). By default the ex-
periment settings for the currently selected channel/track will be used.
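The Focus Surface Outlier Determination described above distinguishes degree 0 (comparison against a simple average) from degree 1 (a linear fit). The following simplified sketch applies the same idea along a single stage axis; the tolerance value and function name are assumptions, not ZEN's actual implementation:

```python
from statistics import mean

def find_outliers(points, degree=1, tol=6.0):
    """Flag support points whose Z value deviates from the fitted focus
    trend by more than tol (µm). degree=0 compares against the plain
    average; degree=1 against a least-squares line, shown here along a
    single stage axis as a simplified stand-in for a surface fit."""
    xs = [p[0] for p in points]
    zs = [p[1] for p in points]
    if degree == 0:
        fitted = [mean(zs)] * len(zs)
    else:
        # Closed-form least-squares line z = a + b*x
        mx, mz = mean(xs), mean(zs)
        b = sum((x - mx) * (z - mz) for x, z in zip(xs, zs)) / \
            sum((x - mx) ** 2 for x in xs)
        a = mz - b * mx
        fitted = [a + b * x for x in xs]
    return [abs(z - f) > tol for z, f in zip(zs, fitted)]

# A tilted sample: Z rises with X; the support point at x=300 is bad
pts = [(0, 10.0), (100, 12.0), (200, 14.0), (300, 30.0), (400, 18.0)]
print(find_outliers(pts, degree=1))  # only the point at x=300 is flagged
```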
In this dialog you can select a sample carrier or open a dialog to create a custom carrier. A pre-
view of the selected carrier is shown on the right side of the list.
Parameter Description
Workgroup Tem- Displays a list of all custom sample carrier templates.
plates
Program Tem- Displays a list of predefined sample carrier templates for several sam-
plates ple carriers. When working with ZEN celldiscoverer, the available tem-
plates are grouped based on their type.
Options
– New Tem- Opens the New Sample Carrier Template dialog [} 387] to create a
plate... new template.
– Show/Edit... Opens the selected template in the Edit Sample Carrier Template di-
alog [} 387] and allows editing.
ZEISS templates are read only. If you want to edit a ZEISS template,
you have to use the Copy and Edit... option.
– Copy and Copies the selected template and opens it in the Edit Sample Carrier
Edit... Template dialog [} 387] dialog for editing.
– Refresh Tem- Refreshes the list of templates after creating a new one.
plates
The dialog for editing a template and creating a new one is the same. If you create a new tem-
plate, the dialog fields are empty. On the right you see a live template preview of the setting for
the template and global support points.
Preview
Here you can see a preview of the sample carrier.
Parameter Description
Name Shows the name of your template. You can enter/edit the name.
Parameter Description
Category Shows which sample carrier category the template uses. You can
choose between Slide, Multislide, Petri Dish, Multiwell, Multi-
chamber and Custom. The category defines the overall appearance
of the template and affects the further editing possibilities of the tem-
plate.
Size
Container Depending on the category of the template you have different op-
tions for editing in this section.
– Type Shows if the containers of the template are rectangles or circles. If the
category is Custom, you can manually set the type of containers.
Area
– Location Defines the position of the template's reference point. You can choose
Mode between Center, Top Left, Top Right, Bottom Left, Bottom Right
and Custom. With Custom you can modify the position directly in
the schematic by dragging the yellow reference point X with the
mouse cursor.
The default position of the reference point varies with the type of car-
rier.
– Position Y Only active if you have selected Custom location mode.
Sets a custom y position for the reference point of the template.
Global Support
Points list
– Set Current Z Sets the current z-value for the selected support points.
for Selected
Support
Points
Properties of Se-
lected Support
Points
– Set Current Z Sets the Z dimension to the current z position of the stage.
Distribute Support
Points
– Columns Sets the number of columns of support points within the template.
– Rows Sets the number of rows of support points within the template.
– Distribute Distributes the entered number of support points defined in the col-
umn and row input fields within the template. Previously defined sup-
port points will be deleted.
– Distribute One Sets one support point in the center of the selected containers. Previ-
Support Point ously defined support points will be deleted.
for each Se-
lected Con-
tainer
Add by Clicking
This module enables the acquisition of time series (time-lapse) images with definition of intervals
between images, total acquisition duration and number of time points. Time series can be started
and stopped manually, at fixed times, after a waiting period or by an input (trigger) signal, for ex-
ample to analyze images already acquired or to change the experiment parameters. The experi-
ment size is limited only by free space on the hard drive and the images are acquired at maxi-
mum possible speed.
Prerequisite ü To set up Time Series experiments, you need to license the Time Series module and activate
it in Tools > Modules Manager. For LSM systems, the Time Series module is part of the
system license.
ü You have set up a new experiment with at least one defined channel and adjusted focus and
exposure time correctly, see also Setting Up a New Experiment [} 51] and Acquiring Multi-
Channel Images with Cameras [} 51].
ü You are on Acquisition tab.
1. In the Acquisition Dimensions section, activate Time Series.
à The Time Series tool is displayed in the Left Tool Area.
2. Open the Time Series tool.
3. Set the Duration of your time series. You are able to select an interval (days, hours, min-
utes, seconds, milliseconds) or cycles (1-n), e.g. 10 cycles.
4. Set the Interval of your time series.
5. Click Start Experiment.
The time series experiment starts. You have successfully set up a time series experiment.
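As a rough sanity check for the Duration and Interval settings above, the number of acquired time points can be estimated as follows. Whether ZEN counts the image at t = 0 exactly this way is an assumption; the sketch is only an illustration:

```python
def time_points(duration_s: float, interval_s: float) -> int:
    """Number of images acquired for a given duration and interval,
    counting the image at t = 0. This rounding behavior is an
    assumption, not ZEN's documented formula."""
    return int(duration_s // interval_s) + 1

print(time_points(60, 5))    # 60 s at 5 s intervals -> 13 images
print(time_points(3600, 30)) # 1 h at 30 s intervals -> 121 images
```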
Info
You can display the individual images via the Time slider on the Dimensions tab.
Info
The shortest possible interval is calculated by performing a blind experiment. The camera expo-
sure time, number of steps of a z-stack and the number of acquisition channels are taken into
consideration in the calculation. Depending on the number of z-stacks and channels and
whether long exposure times have been set, it may take some time to calculate the shortest
time interval.
Use time series to acquire an image series consisting of a number of time points. Define, for ex-
ample, the acquisition interval, the length of the experiment and other specifications to control
the experiment.
Parameter Description
Duration Defines the minimum duration of your experiment with the slider or
the input field.
The dropdown list defines the number of cycles or the duration:
As Long as Possi- Activated: The acquisition of the time series continues as long as
ble possible, depending on the free disk space of the hard disk.
§ For hard disks with more than 20 GB free disk space the acqui-
sition always runs until 2 GB disk space is left.
Example: An experiment using a hard disk with 250 GB of free
disk space will run until 2 GB are left. The calculated/required disk
space for the experiment is 248 GB.
§ For hard disks with less than 20 GB free disk space, at least
10% of the disk space will always be left free.
Example: An experiment using a hard disk with 15 GB of free disk
space will run until 1.5 GB are left. The calculated/required disk
space for the experiment is 13.5 GB.
Measure Speed Not available for LSM acquisition.
Checks whether the experiment can be performed using the interval
which is set. If the interval is too small, the shortest possible value is
defined automatically for the interval.
Start/Start Next Defines the Start, Stop and Pause conditions of your experiment. Se-
Time Slice/Stop/ lect the parameters for the corresponding condition from the Mode
Pause Begin/Pause dropdown list.
End
– Manual The experiment is started immediately when clicking on the Start Ex-
periment button in the Experiment Manager.
– At Time of The experiment is started, stopped or paused at the entered time. En-
Day ter the desired time in the spin box/input field below the dropdown
list.
– After Delay The experiment is only started, stopped or paused once the length of
time entered has passed.
– On Trigger The experiment is started, stopped or paused once a TTL signal has
been received.
– Interval Displays the interval value, which is adopted from the currently set
interval.
– Trigger Out If necessary, assign a TriggerOUT signal for every set marker.
Note that the TriggerOut signal for the marker function consists of a
high spike followed by a 3 second long TTL pulse. Within the duration
of the pulse a further low spike is delivered.
– Adds a new marker.
Add
Interactive Adds and configures switches that can be used to execute certain ac-
Switches
tions during your experiment. To add a new switch, click .
The Switch button opens a dialog to configure an interactive switch,
see Interactive Switches Section [} 394].
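The As Long as Possible stop rule described in the table above can be expressed compactly; this sketch only mirrors the two examples given in the text:

```python
def usable_space_gb(free_gb: float) -> float:
    """Disk space an 'As Long as Possible' time series may consume,
    following the rule quoted above: drives with more than 20 GB free
    run until 2 GB remain; smaller drives always keep at least 10%
    of their free space untouched."""
    if free_gb > 20:
        return free_gb - 2
    return free_gb * 0.9

print(usable_space_gb(250))  # -> 248, as in the first example
print(usable_space_gb(15))   # -> 13.5, as in the second example
```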
Info
If you define times as time series acquisition conditions, these apply once for the entire experi-
ment. This also applies to experiments that use the Experiment Designer.
The following functions are only visible if the Show All mode is activated:
You can define the Start, Stop, Pause Begin and Pause End conditions for your experiment.
Start Next Time Slice can only be used with the trigger functionality of the LSM 980. Select
the parameters for the corresponding condition from the dropdown list:
Parameter Description
Manual The experiment is started immediately when clicking on the Start Ex-
periment button in the Experiment Manager.
At Time of Day The experiment is started, stopped or paused at the entered time. En-
ter the desired time in the spin box/input field to the right of the
dropdown list.
After Delay The experiment is only started, stopped or paused once the length of
time entered has passed.
On Trigger The experiment is started, stopped or paused once a TTL signal has
been received.
Interval Time Only available for LSM. Use this function to interactively change the
interval during a running time series acquisition. Use the + button to
add a new interval value, which is adopted from the currently set inter-
val. Add another interval first before changing the current interval.
Highlight the desired interval during the experiment to change the in-
terval before the next image frame is scanned or assign a TriggerIN
signal to change the interval. You can always assign a TriggerOUT
signal for each interval change.
The change of the interval is also marked in the time series acquisi-
tion. The marker is visible next to the image (2D view, Gallery view,
MeanROI) when the change occurred.
Set Marker Only available for LSM. Use this function to mark events during the
time series acquisition.
Predefine the marker by typing the marker name into the editing
box. Either click Set or use TriggerIN to set a marker during time se-
ries acquisition. You can also assign a TriggerOUT signal for every set
marker. The Set button becomes active when the acquisition is run-
ning. The marker is visible next to the image (2D view, Gallery view,
MeanROI) when the marker was set.
The following functions are only visible if the Show All mode is activated:
Here you can add and configure switches that can be used to execute certain actions during your
experiment.
Left-click on a new or existing switch to open the dialog in which you can configure the button
parameters:
Parameter Description
Name Here you can enter a name for the button.
Color Activated: Shows a colored line at the left edge of the switch.
Color Selection Opens the Color Selection dialog. Here you can select a color for the
line at the left edge of the switch.
Action Here you can select one of the following actions. This action will be
executed when you click on the button:
§ None
§ Set Interval
§ As Fast as Possible
§ Trigger
§ Hardware Setting
§ Jump to previous block
§ Jump to next block
§ Jump to block #
5.9 Z-Stack
This module enables the acquisition of z-stacks with the aid of a motorized focus drive, where the
z-stack is limited only by the travel range of the system and minimum increments. It also offers
functionality to set the correct increment to satisfy the Nyquist criterion, an integrated z-drive
backlash compensation for maximum precision and the z-stack acquisition at relative or absolute
focus positions in the experiment.
Prerequisite ü You have switched on and configured your microscope system including all components.
ü You have set up a new experiment, defined at least one channel and adjusted focus and ex-
posure time correctly, see Setting Up a New Experiment [} 51].
ü You are on the Acquisition tab.
1. In the Acquisition dimensions section activate the Z-Stack checkbox.
Info
Before you perform automatic configuration, the current focus position has to be at the center
of the sample. The camera's current field of view must always be at a position on the sample
that shows a signal in the selected channel.
Info
Automatic z-stack configuration only works with microscopes and systems that do not use an
optical sectioning technique. If you use an LSM, ApoTome, VivaTome, Spinning Disc (CSU)
or another technique for generating optical sections, the z-stack has to be configured manu-
ally.
1. Make sure that you have placed a sample in the visual field of the camera and that the
sample is roughly in focus. Set the exposure time of the camera high enough to receive a
good signal.
2. On the Acquisition tab, in the Z-Stack tool, click Start Auto Configuration.
3. Confirm the system message by clicking OK.
à The automatic configuration starts. The auto configuration can last for a few seconds up
to half a minute depending on the acquisition settings. You can check the status on the
Progress bar in the Status bar.
à The auto configuration sets the focus position for the first, last and center slice of the z-
stack, the number of slices and the interval automatically. The z-stack experiment is set
up successfully now.
4. Click Start Experiment to start the experiment.
You have successfully set up and performed a z-stack experiment.
Info
You can change the area of the sample (z direction in %) covered by the z-stack auto configu-
ration under Tools > Options > Acquisition > Z-Stack. Smaller values enlarge the z-stack,
higher values make the z-stack smaller.
By using this mode, you set the first and the last plane of the z-stack. This mode is suitable if you
do not know the thickness of your sample exactly.
By using this mode, you set the center plane of the z-stack. This mode is suitable if you know the
thickness of your sample. In this case, it will be the fastest method to set up a z-stack.
4. Click Optimal to adjust the number of slices and the best interval according to the Nyquist
criterion. Alternatively, you can set the desired interval and number of slices in the input
fields manually.
à Depending on which option is selected in the Keep section, either the Interval or the
Slices will be held constant.
5. Click Start Experiment to start the experiment.
You have successfully set up and performed a z-stack experiment using the Center mode.
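The Optimal interval mentioned in step 4 is derived from the Nyquist criterion. As an illustration only, a common textbook approximation for a widefield system is an axial resolution of d_z = λ·n/NA², sampled at half that value; ZEN's exact calculation may differ, and the function below is a hypothetical sketch:

```python
def nyquist_z_step_um(wavelength_nm: float, na: float,
                      refractive_index: float = 1.0,
                      leap: bool = False) -> float:
    """Rough Nyquist z interval for a widefield system, using the
    textbook approximation d_z = lambda * n / NA**2 for the axial
    resolution and sampling at half that value. This is a rule of
    thumb, not the exact formula ZEN applies. With leap=True only
    every third plane is imaged (Leap mode), so the step is tripled."""
    axial_res_um = (wavelength_nm / 1000.0) * refractive_index / na ** 2
    step = axial_res_um / 2.0
    return 3.0 * step if leap else step

# 520 nm emission with a 1.4 NA oil objective (n = 1.518)
print(round(nyquist_z_step_um(520, 1.4, 1.518), 3))  # ~0.2 µm
```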
In the Z-Stack tool you can configure acquisitions that comprise several z-planes of your sample.
You can set all the parameters manually using two different modes or have configuration per-
formed automatically.
Info
Automatic z-stack configuration only works with microscopes and systems that do not use an
optical sectioning technique. If you use an LSM, ApoTome, VivaTome, Spinning Disc (CSU)
or another technique for generating optical sections, the z-stack has to be configured manu-
ally.
Parameter Description
First/Last Activated: Displays the controls to configure the z-stack by setting
the first and the last position of the z-stack, see Configuring a Z-
Stack Manually (First/Last Mode) [} 396].
Z-Stack Graphical The graphical display in the left area of the tool represents the config-
Display ured z-stack. In the case of inverse microscopes the objective appears
in stylized form at the bottom of the z-stack. In the case of upright
systems it appears at the top. The blue plane indicates the current
section plane and the values at the top and bottom of the measure-
ment scale on the right-hand side of the graphic indicate the distance
to the center of the z-stack.
- L Marks the last plane of the z-stack. A click on the button changes the
current z-position to this plane.
- C Marks the center plane of the z-stack. A click on the button changes
the current z-position to this plane.
- F Marks the first plane of the z-stack. A click on the button changes the
current z-position to this plane.
- Position Displays the z-position at which the current section plane is located.
Here you can navigate precisely to the relevant z-positions.
Range Displays the range of the configured z-stack from the last to the first
section plane.
Slices Defines the number of z-slices of the z-stack.
Optimal The number on this button shows the distance calculated for the set
channels and the current microscope according to the Nyquist crite-
rion. If you click the button, this value is automatically adopted into
the Interval input field.
When you click this button once, it changes its color to blue. This indi-
cates that the system always uses the optimal interval as you continue
to change acquisition parameters. Click the button again or manually
edit the interval to deactivate this permanently active state.
Note: If you change the objective while the button is activated, the
value might not be updated correctly!
For structured illumination microscopy, you can select between
Nyquist sampling and Leap mode. Nyquist sampling gives the opti-
mal sampling rate for highest resolution. In Leap mode only every
third plane of the stack will be imaged, so the stack would be three-
fold under-sampled. However, special SIM algorithms can be used to
restore the information of the skipped planes.
Keep
- Interval Keeps the set interval between the section planes constant if you
change configuration parameters in the tool.
channel name, the z-slice thickness and the current overlap of two z-
slices in percent are given above the graphic.
- Match Pinhole Changes the pinhole of the tracks to match the set interval.
To achieve a section thickness that is identical for all tracks, set the
pinhole of one track to achieve the desired section thickness and
open up the pinholes of all other tracks to maximum. Then click
Match Pinhole to get the right settings for all tracks.
In case the step interval is subsequently changed to lower or higher
settings, clicking Match Pinhole changes the pinhole size for all
tracks to achieve again 50% overlap between the single steps in each
track.
- Optimal Sets the interval for all tracks to the optimal value. The optimal value
is based on the current smallest available section thickness of the cur-
rent tracks which is defined by the pinhole settings.
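The Nyquist-based interval and the slice overlap described above can be illustrated with a minimal Python sketch. The formula used here (axial resolution ≈ λ·n/NA², sampled at half that distance) is a common textbook approximation, not the exact calculation ZEN performs:

```python
# Sketch only: Nyquist-style z-interval and slice overlap.
# Assumption (not taken from this manual): the optical section thickness
# is approximated by the axial resolution d = lambda * n / NA**2, and
# Nyquist sampling uses half of that distance as the z-step.

def optimal_z_interval_um(wavelength_um: float, n_immersion: float, na: float) -> float:
    """Approximate Nyquist z-step in micrometers."""
    axial_resolution = wavelength_um * n_immersion / na ** 2
    return axial_resolution / 2.0

def overlap_percent(section_thickness_um: float, interval_um: float) -> float:
    """Overlap of two adjacent z-slices in percent."""
    return max(0.0, (section_thickness_um - interval_um) / section_thickness_um * 100.0)

# Example: 0.52 µm emission, oil immersion (n = 1.518), NA 1.4 objective.
step = optimal_z_interval_um(0.52, 1.518, 1.4)
print(round(step, 2))                        # ≈ 0.2 µm
print(round(overlap_percent(0.8, 0.4), 1))   # 50.0 (% overlap)
```

In Leap mode, where only every third plane is imaged, the effective step would be three times this value.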
Parameter Description
ment.
Note that resetting the focus position in the Focus tool does not af-
fect or shift the absolute position values in the list.
For additional information, see Auto Z Brightness Correction with
LSM [} 400].
- Add Adds the current z-position to the list and stores the currently config-
ured settings. If the position is already in the list, the values are up-
dated. The positions do not need to be added in a certain order.
- Move to Changes the focus position to the selection in the list. This can also be
done by a double click on the list item. If Enable Test is activated,
the parameters are immediately applied during a continuous scan.
- Load.../Save... Loads or saves the stored position parameters to/from a *.ABC file.
- Extrapolate If activated, the interpolation between the Positions in the list can be
extrapolated to the actual first and last slice of a z-stack if those are
not part of the defined range of positions.
- Enable Test If activated, the parameters are updated while changing the current z-
position either with the SW focus or with the handwheel of the stand.
This allows you to quickly check the parameters during a continuous scan.
The changes of the parameters are not indicated during the actual ex-
periment.
Start Auto Configuration Automatically configures the z-stack using the current sample, see
Configuring a Z-Stack Automatically [} 395].
The following parameters are set automatically:
§ z position of the central plane
§ Distance between the individual planes
§ Number of section planes
The settings of Auto Z Brightness Correction are part of the image acquisition and are reused
with other settings of an image. They are also part of an experiment setting. However, the func-
tion is not activated for reuse or when loading an experiment, as the settings apply to the absolute
z-position in µm used when the (previous) image was acquired. If the new stack is acquired at a
different position, using the previously defined settings (Extrapolate) can lead to extreme
overexposure of the sample. Enable Test is always deactivated when an image stack is reused or
an experiment is loaded.
Workflow:
To reuse the Auto Z Brightness parameters for subsequent z-stacks, make sure to manually set
the center (or first or last) position of the first z-stack to zero (Focus TW, Z-position: Set
Zero) before defining the Auto Z parameters. Set the center (or first or last) position of all follow-
ing z-stacks to zero as well, so that the Auto Z Brightness parameters can be reused accordingly.
The same logic applies when saving and loading the parameters. When switching between linear
and spline interpolation during a continuous scan, the current acquisition parameters are not
updated until the z-position is changed.
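The interpolation and extrapolation of stored parameters can be sketched as follows (linear mode only; the parameter name and values are hypothetical, and ZEN's spline mode is not covered):

```python
# Sketch only: linear interpolation of an acquisition parameter (e.g.
# laser power) between stored z-positions, with optional extrapolation
# to slices outside the defined range (the "Extrapolate" option).

def interpolate_parameter(positions, z, extrapolate=False):
    """positions: list of (z_um, value) pairs sorted by z."""
    zs = [p[0] for p in positions]
    vs = [p[1] for p in positions]
    if z <= zs[0]:
        if not extrapolate:
            return vs[0]                       # clamp to first stored value
        slope = (vs[1] - vs[0]) / (zs[1] - zs[0])
        return vs[0] + slope * (z - zs[0])
    if z >= zs[-1]:
        if not extrapolate:
            return vs[-1]                      # clamp to last stored value
        slope = (vs[-1] - vs[-2]) / (zs[-1] - zs[-2])
        return vs[-1] + slope * (z - zs[-1])
    for (z0, v0), (z1, v1) in zip(positions, positions[1:]):
        if z0 <= z <= z1:
            return v0 + (v1 - v0) * (z - z0) / (z1 - z0)

pts = [(0.0, 2.0), (10.0, 4.0)]                # z in µm, laser power in %
print(interpolate_parameter(pts, 5.0))         # 3.0 (interpolated)
print(interpolate_parameter(pts, 15.0, True))  # 5.0 (extrapolated)
```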
6 General Toolkits
Toolkit Functionality
2D Toolkit § Image Analysis [} 402]
§ Advanced Processing [} 471]
This module enables you to create automatic measurement routines very easily. The Image Analy-
sis wizard guides you through the steps of creating an automatic measurement program and al-
lows you to set up even complex measurement tasks with ease. The steps of the wizard include
image segmentation, object separation, and measurement of geometrical or intensity features.
After you have completed the setup, you can apply these settings to the data to be analyzed and
obtain precise measurement results. You can display the results in table and list form and export
them to CSV format.
For more information, see the following examples:
When creating a new analysis setting for your images, you can select the following segmentation
methods:
§ Segment Region Classes Independently: This method allows you to define several classes
and subclasses. With this method, you can define the segmentation algorithm for each class
independently.
§ ZOI (Zones of Influence): This method constructs a zone of influence (ZOI) and a ring
around each primary object. The primary objects are generated by segmenting the selected
image channel with the selected class segmenter. The ring is defined by its width and distance
from the primary object. The distance of the ZOI border from the ring can be specified. The
ZOI area also incorporates the primary object and ring area.
1. On the Analysis tab, in the Image Analysis tool, click and select New from the
dropdown.
2. In the Settings field, enter a name for your image analysis setting and click Save.
à A new *.czias file is created and saved in the ...\ZEN\Documents\Image Analysis Set-
tings folder.
à You have created a new image analysis setting. As default the method Segment region
classes independently is used.
3. For Method, click .
à The Segmentation Method Selection dialog opens.
4. From the Method drop down menu, select a method and click OK.
à You have created a new image analysis setting using the method of your choice.
5. In the Image Analysis tool, click Edit Image Analysis Setting.
The Image Analysis Wizard opens with the Classes step and includes already a predefined set
of classes, depending on the selected method. Follow the steps in the wizard to define your im-
age analysis. Each method comes with a predefined set of steps which allows you to make all
necessary settings for image analysis.
§ Classes: Allows you to add classes and subclasses.
§ Frame: Allows you to define a measurement frame. Only the area of the frame will be ana-
lyzed.
§ Region Filter: Allows you to define simple or complex conditions to filter the detected ob-
jects according to their parameters.
§ Automatic Segmentation: Allows you to set the parameters for the automatic segmenta-
tion.
§ Interactive Segmentation: Allows you to modify the results of the automatic segmentation
or draw/delete objects. Note: this step only generates relevant results in a Start Interactive
Analysis run.
§ Features: Allows you to select measurement features from an extensive list and to define
measurement features for classes and subclasses independently.
§ Statistics: Allows you to define custom statistical features for your regions or objects.
§ Results Preview: Shows a preview of your measurement results for the current view port.
For more information, see the following examples:
§ Measuring Mean Fluorescence Intensity on a Ring around the Primary Object [} 416]
§ Counting the Number of Objects in a Ring around the Nucleus [} 423]
This topic will show you how to set up a measurement program using the Image Analysis Wiz-
ard. After the setup is successfully completed, the program will be used to measure fluorescence
intensity in a multichannel image.
In this example we are using a multichannel image with two channels (1st channel blue (DAPI),
2nd channel red (mRFP1)) of fluorescence-stained cells. First, we detect the blue-stained cell nuclei
in the first channel. Then we measure the fluorescence intensity in both channels for the de-
tected nuclei.
See also
2 Creating a New Image Analysis Setting [} 403]
Prerequisite ü You have created a new image analysis setting with the Segment region classes indepen-
dently method and opened the Image Analysis Wizard, see Creating a New Image Analy-
sis Setting [} 403].
1. In the Classes step, click on Class 2 in the list and enter DAPI Individual Nuclei in the
Name input field.
2. Click on Classes1 in the list and enter DAPI All Nuclei in the Name input field.
3. Click Next.
à The detected nuclei are overlaid in blue. The threshold values are displayed in the
Threshold section in the Lower/Upper input fields.
4. Click on the areas of the blue cell nuclei that have not yet been detected until these have
been completely overlaid.
5. Activate the Fill Holes checkbox.
à This fills any holes in the detected cell nuclei.
6. Select the Watersheds entry from the dropdown list in the Separate section and set the
number to 3.
à Clear separation lines are now visible between the cell nuclei.
7. Click on Next.
6. Click on OK.
à The selected features are displayed in the Region Features section.
7. In the section Annotation Options, activate the checkbox Color.
8. Select Yellow from the drop-down list.
9. Click on Next.
à The object ID and the values for the average fluorescence intensities per channel are dis-
played in the image at the cell nuclei in question and in the data list to the right of the
image.
4. Click on a row in the data list or alternatively on a cell nucleus in the image.
à The row in the data list containing the measurement values is highlighted. The associated
cell nucleus is surrounded by a red rectangle.
There is a direct link between the measured cell nuclei in the image and the measured values in
the data table. You can either click on a measured cell nucleus in the image or on a row in the
data table.
This topic will show you how to set up a measurement program using the Image Analysis Wiz-
ard. After this, the program will be used to count the number of fluorescence spots in a multi-
channel image.
In this example we are using a multichannel image with 2 channels (1st channel blue (DAPI), 2nd
channel green (GFP)) of fluorescence-stained cell nuclei. First, we detect the blue-stained cell nu-
clei in the first channel and then the green stained signals in the second channel. Then we mea-
sure the number of green fluorescence signals per nucleus.
See also
2 Creating a New Image Analysis Setting [} 403]
Prerequisite ü You have created a new image analysis setting with the Segment region classes indepen-
dently method and opened the Image Analysis Wizard, see Creating a New Image Analy-
sis Setting [} 403].
1. In the Classes step, click on Classes1 in the list and enter Nuclei in the Name input field.
2. Select a blue color from the dropdown list in the Color section.
3. Click on the Class 2 entry in the list and enter Individual Nucleus in the Name input field.
11. For Object Color, click Fixed and select the green color from the dropdown list.
à You have now set up a subclass for the signals inside the individual nucleus class (parent
class).
12. Click Next.
5. Select the Watersheds entry from the dropdown list in the Separate section and set the
number to 17.
à Clear separation lines are now visible between the cell nuclei.
4. Remove superfluous features from the list. Select the feature and click on the
Delete button.
5. Click on the Individual Nucleus entry in the list.
13. Remove superfluous features from the list. Select the feature and click on the
Delete button.
14. Click on the Individual Signal entry in the list.
15. Click on the Edit button in the Features of individual regions section.
à The Feature Selection dialog opens.
16. Remove superfluous features (e.g. Area, Perimeter) from the list. Select the feature and click
on the Delete button.
17. Click on the OK button.
à The selected features are displayed in the Regions Features section.
18. Click on Next.
à The ID of the parent, the ID, and the Signals Count (the number of green signals) of the
measured nuclei are displayed in the data list to the right of the image.
à In the Analysis View you see your image with the measured cell nuclei overlaid in blue
and the signals overlaid in green. To the right of this, you see the data list containing the
number of signals per nucleus.
à Only the number of signals of measured nuclei is displayed. Nuclei touching the frame
are not taken into account.
6.1.4 Measuring Mean Fluorescence Intensity on a Ring around the Primary Object
The following example shows how to use the Zone of Influence (ZOI) method to measure intensi-
ties within a ring that is associated with the main object, e.g. the cell nucleus. An application ex-
ample is transport assays, where the intensities of a certain fluorescent marker in the cytoplasm
are compared to the intensities within the nucleus.
In this example we use a multichannel image of fluorescence-stained cells. The cell nuclei are
stained with AF568 and the mitochondria are stained with AF488. First, we detect the nuclei in
the AF568-channel as primary object. A zone of influence is generated around each detected pri-
mary object. In this area, we can define a ring and specify its thickness and distance from the
main object. You can use this ring to measure intensities or to detect further sub-objects on it. For
more information, see Counting the Number of Objects in a Ring around the Nucleus [} 423].
See also
2 Creating a New Image Analysis Setting [} 403]
Prerequisite ü You have set up the image analysis setting with the method ZOI (Zones of Influence) and
have opened the Image Analysis Wizard, see Creating a New Image Analysis Setting
[} 403]. This has created the classes ZOIs/ZOI, Primary Objects/Primary Object, and Rings/
Ring by default.
1. If you want to extend the predefined list of classes, click Add Subclass in the Classes step.
à You can find an example how to detect objects on the ring in Counting the Number of
Objects in a Ring around the Nucleus [} 423].
2. If necessary, click Add Class to extend the predefined list by another independent class of
objects.
Note that you cannot add further rings.
3. Select the image channel which you want to use for object detection. In this example, the
primary objects (the nuclei) are in the AF568 channel. Therefore, click on the class Primary
Object and select the channel containing the nuclei.
Parameter Description
ZOIs Class of all zones of influence.
Ring Segment Part of a ring (a ring often consists of only one ring segment).
Optionally, you can define the area of each image to be analyzed. If shading effects or other
reasons make you want to include only a certain area of each image in the analysis, you can
define a frame (rectangle, circle or polygon). Only the area within this frame will be further
analyzed.
With the Mode parameter, you can furthermore choose how the analysis treats objects that are
cut by the border of the image or the frame:
1. Select Primary Object and set up suitable parameters to detect the objects, i.e. threshold,
area, separation.
à As soon as objects are detected, the ZOI and Ring are automatically created around each
primary object with the preset parameters.
2. To modify Ring Distance and Width, select the Ring Element class.
à Now, you can define the location and dimension of the ring flexibly. You can set it at the
edge of the main object or inside the main object. You can also define an arbitrary dis-
tance.
3. Define the following parameters:
à Ring Distance: Distance from the surface of the primary object. Negative values mean
that the ring starts at the defined distance within the primary object. Ring Width: Defines
the width of the ring.
à The ZOI is automatically adapted to exceed the class with the larger diameter, i.e. either
ring or primary object, by at least 3 pixels (default setting).
1. Select the ZOI class, and set the distance with the ZOI Width slider. You can set the distance
between the outer border of the ZOI and the outer border of either ring or primary object,
respectively. The ZOI Width is at least 3 pixels larger than either the ring or the primary ob-
ject, whichever is larger. The ZOI area also incorporates the area of the ring and the primary
object, and can thus serve, for example, for measuring features over the complete cell.
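For a circular primary object, the ring and ZOI geometry described above can be sketched as follows (a simplification for illustration; real primary objects are segmented masks, not circles):

```python
# Sketch only: ring and ZOI geometry around a circular primary object of
# radius r (all values in pixels). Negative ring distances start the ring
# inside the primary object, as described in the text. The 3-pixel ZOI
# margin matches the default setting mentioned above.

def ring_radii(primary_radius, ring_distance, ring_width):
    """Inner and outer radius of the ring."""
    inner = primary_radius + ring_distance
    outer = inner + ring_width
    return inner, outer

def zoi_radius(primary_radius, ring_distance, ring_width, margin=3):
    """The ZOI exceeds whichever is larger, ring or primary object."""
    _, ring_outer = ring_radii(primary_radius, ring_distance, ring_width)
    return max(ring_outer, primary_radius) + margin

print(ring_radii(20, 5, 4))    # (25, 29): ring 5 px outside the object
print(zoi_radius(20, 5, 4))    # 32: ring outer edge + 3 px margin
print(zoi_radius(20, -10, 4))  # 23: ring lies inside, object dominates
```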
You can define conditions for the primary objects (and additionally defined subclasses) to be mea-
sured, e.g. include only objects of a certain size, shape, intensity or other parameters. You can de-
fine suitable parameters for each of the defined objects.
1. Select the Primary Object and click Edit. From the list of features on the right, you can
add features via double-click. Once you have added all desired features, click OK.
à The selected conditions appear in the left tool area.
2. Set the minimum and maximum values by clicking on the objects with the desired features,
or by entering the numbers directly.
The following figure is an example and shows the result if a certain condition of the circularity of
each primary object needs to be fulfilled.
Fig. 23: Region Filter based on the circularity of the primary object
You can define individual measurement features for each class. You can copy the measurement
features defined for one class to the other classes via Copy to all.
1. Select the class for which you want to define measurement features, and click Edit. From
the list of features on the right you can add features to the selected features list on the left.
à These features are automatically calculated for every object during image analysis. All
classes have the ID of the parent and ID as default features. With these IDs you can later
group the associated parameters from the result excel lists, if necessary.
à The class Ring additionally has Area and Count as default parameters.
à To attribute the mean intensity for channel AF488 (the mitochondria) measured on the
Ring to the Primary Object, select Ring and click Edit. The Feature Selection window
opens.
2. From the feature list on the right select Intensity Mean Value of channel AF488 and
add it to the selected features on the left. In the Copy column, from the drop-down menu
1. Click on the different objects in the Analysis tab to get the preliminary measurement result
for all objects.
2. Click Finish, to save the analysis settings and close the wizard.
à The wizard closes. The analysis settings are saved.
You have the following options to run a predefined image analysis setting on your data set:
§ Start Interactive Analysis: Analyze interactively with all steps that have been selected with
the checkbox Interactive during setup of the image analysis.
§ Start Analysis: Runs the image analysis setting without dialog.
When the analysis is finished, the main view switches to the Analysis tab and displays the seg-
mented image along with the results of the analysis.
Select the different objects to display the corresponding measurement tables. The data in the ta-
bles and the regions in the image are interlinked. A click on the object in the image highlights the
corresponding line in the data table and vice versa.
This example is similar to Measuring Mean Fluorescence Intensity on a Ring around the Pri-
mary Object [} 416] and also uses the same data. This example shows how to count the number
of objects on a ring that is associated with the main object, e.g. the cell nucleus. The images are
taken from AF568 stained nuclei. The mitochondria are stained with AF488. The channel of the
nuclei is used for image segmentation. The ZOI-segmentation method attributes a zone of influ-
ence (ZOI) and a ring to each detected nucleus. This area is used as a search range to detect sub-
objects, in this case the mitochondria.
See also
2 Creating a New Image Analysis Setting [} 403]
Prerequisite ü You have set up the image analysis setting with the method ZOI (Zones of Influence) and
have opened the Image Analysis Wizard, see Creating a New Image Analysis Setting
[} 403]
1. In the Classes step, select Ring or Ring Element and click Add Subclass to extend the
predefined set of classes with a subclass of the Ring Element.
à Another class below the Ring Element is added.
2. Give this class a meaningful name, e.g. Mitochondria/Mitochondrion.
3. The cell nuclei (primary objects) are stained with AF568 (red channel), therefore you need
to select this channel to segment the cell nuclei. Select Primary Object and for Channel
select AF568.
See also
2 Measuring Mean Fluorescence Intensity on a Ring around the Primary Object [} 416]
You can define individual measurement features for each class. You can copy the measurement
features defined for one class to the other classes via Copy to all.
1. Select the class for which you want to define measurement features, and click Edit. From
the list of features on the right you can add features to the selected features list on the left.
à These features are automatically defined for every object during image analysis. All
classes have ID of the parent and ID as default features. This allows you to later
group the associated parameters from the exported Excel lists, if necessary.
à The class Ring additionally has Area and Count as default parameters.
2. In this example, we count the number of objects (the number of mitochondria fragments)
within each ring. To attribute the number of mitochondria fragments to the Primary Ob-
ject, select Mitochondria and click Edit.
à The Feature Selection dialog opens.
3. In the parameter list on the right, select the feature Count. In the Copy drop-down menu,
click .
à This measurement feature is copied to the corresponding Primary Object.
1. Click on the different objects in the Analysis tab to get the preliminary measurement result
for all objects.
2. Select the Primary Object to get a list with preliminary measurement results with the fea-
tures ID of the parent, ID, Area, and Ring Mitochondria Counts. Note that the results are
preliminary and only include the part of the image you see in the viewport.
à The result of the image analysis shows Ring Mitochondria Counts as a feature of Pri-
mary Object.
3. In the table, click on column Ring Mitochondria Counts to sort the entries in increasing
or decreasing order.
4. Click on Finish to save the analysis settings and to close the wizard.
You can now run the analysis as described in Measuring Mean Fluorescence Intensity on a Ring
around the Primary Object [} 416].
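The ID of the parent column described above links child objects to their primary object, which makes per-nucleus counts easy to reproduce when post-processing exported tables, e.g. in Python (the example rows are hypothetical):

```python
# Sketch only: reproducing a per-parent count such as "Ring Mitochondria
# Counts" from flat result rows, grouping child objects by the
# "ID of the parent" column.
from collections import Counter

# Hypothetical child-object rows: (parent_id, child_id)
signals = [(1, 11), (1, 12), (1, 13), (2, 21), (3, 31), (3, 32)]

counts = Counter(parent for parent, _ in signals)
print(dict(counts))   # {1: 3, 2: 1, 3: 2} -> three signals in nucleus 1
```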
1. Click on the Create Measurement Data Table button on the Analysis tab.
à The two data lists are now separate documents.
2. Save each of the data lists via the File menu > Save As. Allocate a name and select .csv as
the file type.
à The measurement data tables are saved in CSV format and can therefore be opened di-
rectly in Excel.
3. Click on the image and save it via the File menu > Save As. Allocate a name and select .czi
as the file type.
The image is saved with the measurement results. If you open the image, the measurement re-
sults can be viewed in the Analysis View.
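Once exported, such a CSV table can be processed outside ZEN, for example in Python. The column names and values below are placeholders; the actual headers depend on the features you selected in the wizard:

```python
# Sketch only: reading an exported measurement table and computing a
# summary value. "ID" and "IntensityMean_AF488" are hypothetical column
# names standing in for the features selected during setup.
import csv
import io

sample = io.StringIO(
    "ID,IntensityMean_AF488\n"
    "1,103.5\n"
    "2,98.1\n"
    "3,120.4\n"
)
rows = list(csv.DictReader(sample))
values = [float(r["IntensityMean_AF488"]) for r in rows]
mean = sum(values) / len(values)
print(round(mean, 2))   # 107.33
```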
It is possible to create or extract an image analysis setting from an image which has already been
analyzed. This allows you to ensure that a new data set is analyzed exactly in the same way as a
previously analyzed data set.
The Image Analysis tool allows you to perform an analysis interactively. It runs the selected anal-
ysis setting with all the steps that have been marked as interactive in the setup. Steps that you
have not marked as interactive in the Image Analysis Wizard are run with the values predefined
in the image analysis setting. The setting does not pause to allow you to change these values in-
teractively.
Start Interactive Analysis also allows you to directly execute an image analysis on a czi image
without predefining an image analysis setting. For that you need to have an image analysis setting
where all steps are marked as interactive. Then it is possible to modify every step of the Image
Analysis Wizard during the interactive analysis and do a one-time image analysis on the dataset
without creating a new setting. In order to retrieve an image analysis setting from an already ana-
lyzed dataset, see Creating an Image Analysis Setting from an Analyzed Image [} 426].
Note: When you analyze an image interactively, the modifications of the settings during the inter-
active analysis are not saved.
Prerequisite ü You have defined an image analysis setting where all analysis steps you want to adjust inter-
actively are marked as interactive.
1. On the Analysis tab, open the Image Analysis tool.
2. For Setting, select your image analysis setting.
3. Click on Start Interactive Analysis.
à The Image Analysis Wizard opens with all the steps that are defined as interactive in
the setting.
4. Modify your settings for each step and click Next to get to the following step in the wiz-
ard.
5. At the end, click Finish to close the wizard.
You have now analyzed your image interactively and the results of this analysis are displayed.
Prerequisite ü You are in the Features step of the image analysis wizard.
1. Select a class in the list and click Custom Feature.
à The Custom Feature Editor opens. All previously defined features are displayed in the
list; on initial opening, an empty default entry is already created.
2. In the Custom Features list, click to add a new entry. Alternatively, if no feature has
been defined yet, select the automatically displayed default entry.
à A new entry is added to the list.
3. Under Define Custom Feature, define the Name for your feature and optionally specify a
Unit, if applicable.
4. In the Define Operands list, click to add a new operand. Alternatively, if no operand
has been defined yet, select the automatically displayed default entry.
à A new operand entry is created.
5. Select the Class which is used to generate the operand.
6. In the Features dropdown list, select the measurement feature that you want to use to de-
fine the operand.
à The selected class and measurement feature are displayed as Expression.
7. Repeat the previous steps to define all operands you need to calculate your custom feature.
à All defined operands are displayed in the Define Operands list.
8. Under Define Custom Expression, enter your operands and use the mathematical opera-
tors to define the calculation for your custom feature, e.g. 100*(a/b+Math.Pow(c,2)).
9. Click Verify Expression.
à The syntax of your expression is checked and verified. In case the expression is not valid,
an error message is displayed.
10. Repeat this whole workflow to create all custom features required for your image analysis.
à All created features are displayed in the Custom Features list of the respective class.
11. Click OK.
à The editor closes and saves the defined custom features. They are displayed in the list of
the Features step of the wizard.
à After analyzing an image with the setting, the custom features are displayed in the result
table of the respective class and are also available for the charts in the Analysis view,
just as for any other features.
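The example expression 100*(a/b+Math.Pow(c,2)) uses .NET-style syntax (Math.Pow). For comparison, the equivalent calculation written in Python; the operand values are hypothetical stand-ins for measured features:

```python
# Sketch only: the custom-feature expression 100*(a/b+Math.Pow(c,2)) from
# the text, rewritten in Python. Math.Pow(c, 2) corresponds to c ** 2;
# the operands a, b, c would come from the measured features.

def custom_feature(a: float, b: float, c: float) -> float:
    return 100.0 * (a / b + c ** 2)

print(custom_feature(4.0, 2.0, 3.0))   # 100 * (2 + 9) = 1100.0
```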
See also
2 Examples for Custom Features [} 438]
Prerequisite ü You are in the Statistics step of the image analysis wizard.
1. Select a class in the list and click Define Custom Feature.
à The Custom Statistic Feature Editor opens. All previously defined features are displayed
in the list; on initial opening, an empty default entry is already created.
2. In the Custom Features list, click to add a new entry. Alternatively, if no feature has
been defined yet, select the automatically displayed default entry.
à A new entry is added to the list.
3. Under Define Custom Feature, define the Name for your feature and optionally specify a
Unit, if applicable.
4. In the Define Operands list, click to add a new operand. Alternatively, if no operand
has been defined yet, select the automatically displayed default entry.
à A new operand entry is created.
5. Select the Class which is used to generate the operand.
6. In the Features dropdown list, select the measurement feature that you want to use to de-
fine the operand.
7. Select the Statistical Operation the operand is used for.
à The selected class and measurement feature are displayed as Expression.
8. Repeat the previous steps to define all operands you need to calculate your custom statisti-
cal feature.
à All defined operands are displayed in the Define Operands list.
9. Under Define Custom Expression, enter your operands and click on the mathematical op-
erators to define the calculation for your custom statistical feature, e.g.
100*(a/b+Math.Pow(c,2)).
10. Click Verify Expression.
à The syntax of your expression is checked and verified. In case the expression is not valid,
an error message is displayed.
11. Repeat this whole workflow to create all custom statistical features required for the image
analysis.
à All created features are displayed in the Custom Features list of the respective class.
12. Click OK.
à The editor closes and saves the defined custom features. They are displayed in the list of
the Statistics step of the analysis wizard.
See also
2 Examples for Custom Statistical Features [} 439]
Once you have downloaded an AI model, you can use it for image analysis by creating an image
analysis setting.
Prerequisite ü You have downloaded or imported an AI model trained on one or multiple channels, see
Downloading AI Models [} 71] or Importing AI Models [} 72].
1. On the Applications tab, open the AI Model Store tool and click Open AI Model Store.
Alternatively, click for a model in the Available Models list and select Create Anal-
ysis Setting.
à The AI Model Store dialog opens to display all available AI models.
2. Select the model in the list.
à The Properties section on the right displays more detailed information about your se-
lected model.
3. In the Properties section, click Create Analysis Setting.
à A file browser opens.
4. Enter a name for the setting and click Save.
à The setting is saved as *.czias file in the respective folder.
You can now use the setting for image analysis in ZEN and select it in the Image Analysis tool.
Note that some parameters are pre-defined in this setting based on the model and cannot be
changed, e.g. the number of classes and the segmentation method. If you need to create a more
complex class hierarchy (e.g. define subclasses or zones of influence), set up an ordinary
image analysis setting. In the Automatic Segmentation step of the setup, you can then select
models trained on single-channel images for the segmentation of the individual classes.
See also
2 AI Model Store Tool [} 888]
Info
4 To highlight the row of the table containing the measured values of an object, click on a
segmented object in the image or in the chart. To highlight multiple rows, press Ctrl and
click on multiple objects/data points.
4 To highlight the corresponding segmented object in the image, click on a row in the table
or on the data point in the chart. To highlight multiple objects, press Ctrl and click on mul-
tiple rows/data points.
4 To highlight the measured value of an object in the scatter chart or in the histogram, click
on one or more rows in the table. The corresponding data point in the chart turns red. To
change the chart type, on the Custom Chart tab, click on the corresponding Chart Type
button. To highlight multiple objects, press Ctrl and click on multiple rows/objects.
You can also move your stage in the Analysis view if you click on the Stage button in the Di-
mensions tab and then on a position in the image. The stage is displayed as a red crosshair. This
allows you to move your stage to particular points of interest which your analysis detected but
would have been very hard to identify in the original image in the 2D view.
CAUTION
Risk of Crushing Fingers
The drive of a microscope stage with a motorized horizontal stage axis (stage drive) is strong
enough to crush fingers or objects between the stage and nearby objects (e.g. a wall).
4 Remove your fingers or any objects from the danger area before moving the stage drive.
4 Release the joystick immediately to stop the movement.
See also
2 Charts and Tables of the Analysis View [} 434]
On the Analysis tab you can define how the measured objects are displayed in an image.
Parameter Description
Show Objects Activated: Displays the measured objects in the graphics plane.
Opacity Here you can set the opacity with which the measured objects are dis-
played in the graphics plane.
Delete Measurement Data Deletes all objects and measurement data from the image.
Parameter Description
- All classes (separately) Creates individual data tables for all classes.
Classes section In the classes section, you can select the class for which the measure-
ment features should be displayed in the data table. For each class
there are two entries: the first entry concerns all the objects belonging
to the class (field features) and the second represents an individual
object (object features).
Object Classification Only visible if you have executed an object classification (see Using a Trained Model for Object Classification [} 523]).
Displays a table with the predicted objects for a specific class.
Here you can export the currently displayed custom chart as an image (with the file format PNG,
TIF, BMP, or JPG).
Parameter Description
Screen Resolution Exports the chart with a resolution suitable for screens.
(96ppi)
Printing Resolution Exports the chart with a resolution suitable for printing.
(300ppi)
Transparent Back- Activated: The chart is exported with a transparent background. Pos-
ground sible file formats for this kind of export are PNG and TIF.
Export Opens a dialog to select the folder and path for export.
On this tab you select the data which is shown in the multiple scenes chart of the Analysis view.
Filled wells which contain objects (images) are displayed in yellow; unfilled/empty wells are transparent.
See also
2 Selecting Data for the Multiple Scenes Chart [} 436]
This tab allows you to export the currently opened table. The table is exported in csv format.
Parameter Description
Export Table Opens an explorer to select the location for export.
You can display a histogram or visualize the relationship between two quantitative variables by
scatter chart and adapt the zoom factor to optimize the display. Draw a rectangle into the his-
togram/scatter chart to zoom in. Double click into the histogram/scatter chart to zoom out.
Parameter Description
Enable chart Activated: Displays the chart in the Analysis View.
Deactivated: Displays no chart in the Analysis View. Deactivates all
controls described below to edit the chart.
Chart Type
X-Axis Selects the feature that is displayed on the x-axis of the chart.
The elements displayed in the dropdown menu depend on the previ-
ously defined measurement features in the image analysis setting.
Adapt Zoom Fac- Activated: Automatically zooms the chart when selecting items from
tor the data table or the image.
Parameter Description
Time Series Only available for time series images.
Activated: Displays the analysis results of the selected class in a time
series chart.
Heatmap Only available for experiments with multi well/multi chamber plates.
Activated: Displays a heatmap of the well plate.
See also
2 Analysis View [} 429]
In the feature selection dialogs for individual or all regions, you can define where a selected fea-
ture is copied to. The following copy operations are available:
§ Copies the selected feature also to the result table of CLASSES on the next higher hierarchy level.
§ Copies the selected feature also to the data table of CLASS on the next higher hierarchy level.
§ Copies the selected feature also to the data table of the first CLASSES element on the same hierarchy level.
§ Copies the selected feature also to the data table of the first CLASS element on the same hierarchy level.
§ Copies the selected feature also to the data table of the first CLASSES element on the next higher hierarchy level.
§ Copies the selected feature also to the data table of the first CLASS element on the next higher hierarchy level.
The Analysis View shows different charts and tables depending on the analysis, the experiment,
and the selected options in the Custom Chart tab [} 432]. The axes of the charts are configured
in the Custom Chart tab and the shown analysis results are always those of the class selected in
the Analysis tab. An image of the charts can be exported with the Chart Export Tab [} 431]. You
can zoom into a chart by using the mouse wheel or drawing a zoom rectangle. To zoom out to
the original view, double-click on the background of the plot.
Info
4 To highlight the row of the table containing the measured values of an object, click on a
segmented object in the image or in the chart. To highlight multiple rows, press Ctrl on
your keyboard and click on multiple object/data points.
4 To highlight the corresponding segmented object in the image, click on a row in the table
or on the data point in the chart. To highlight multiple objects, press Ctrl on your key-
board and click on multiple rows/data points.
4 To highlight the measured value of an object in the scatter chart or in the histogram, click
on one or more rows in the table. The corresponding data point in the chart turns red. To
change the chart type, on the Custom Chart tab, click on the corresponding Chart Type
button. To highlight multiple objects, press Ctrl on your keyboard and click on multiple
rows/objects.
Heatmap
For multi well or multi chamber experiments, the Analysis View offers a heatmap to display the
measurement results on a well or chamber level. To see this map, activate the checkbox
Heatmap in the Custom Chart tab.
Prerequisite ✓ In the Custom Chart tab, the Multiple Scenes checkbox must be activated.
1. Click on the Sample Carrier tab.
2. Select the desired well or scene. To select multiple wells or scenes, press Ctrl on your keyboard and click on the individual wells. Alternatively, drag over the desired wells with the left mouse button pressed. To select a specific region of wells, press Shift on your keyboard and click on the two wells that mark the corners of the region.
The chart displays the data of the selected wells or scenes.
See also
2 Custom Chart tab [} 432]
2 Sample Carrier Tab [} 432]
The heatmap displayed in the Analysis View is calculated based on statistical features which you
add in the Features step of the Image Analysis Wizard.
The statistical measurement is done on the basis of the top class objects of your analysis setting.
Consider the following example:
To calculate, for example, the Mean Area of the Classes 1, the areas of all Class 2 objects are summed up and the total area is then divided by the number of objects.
This table shows the offered statistical parameters and the calculation description:
Parameter Description
Area Calculates the total area of all objects found in a well or chamber.
Area Percentage Calculates the percentage that the area of all objects has in respect to
the whole frame area (area of all objects divided by the whole frame
area).
Intensity Std Calculates the intensity standard deviation divided by the number of
objects.
Mean Intensity Calculates the mean value of the mean of the intensity and divides it by the number of all objects in all scenes.
Mean Area Calculates the mean value of the mean of the area and divides it by
the number of all objects in all scenes.
Symbols used in the statistical formulas:
§ os = Number of objects found in the scene s
§ xi = Mean area of the object i
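As an illustration of how these statistics combine per-object values, the following NumPy sketch computes Mean Area and Area Percentage over hypothetical example data; it is not ZEN's actual implementation:

```python
import numpy as np

def mean_area(areas_per_scene):
    """Mean Area: sum of all object areas divided by the total
    number of objects in all scenes (illustrative sketch)."""
    all_areas = np.concatenate([np.asarray(a, dtype=float) for a in areas_per_scene])
    return all_areas.sum() / all_areas.size

def area_percentage(object_areas, frame_area):
    """Area of all objects relative to the whole frame area, in percent."""
    return 100.0 * np.sum(object_areas) / frame_area

# Hypothetical example: two scenes with object areas in square micrometers
scenes = [[4.0, 6.0], [5.0]]
print(mean_area(scenes))                   # 5.0
print(area_percentage([4.0, 6.0], 100.0))  # 10.0
```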
The following examples illustrate the functionality of creating custom features in the image analy-
sis wizard, see also Creating Custom Features [} 427].
This custom feature calculates the number of spots in each cell and is based on an analysis setup
for an image with cells (DAPI) and spots (EGFP). The class tree could look like this:
To create this custom feature you have to define the following operand and expression for the
class Cell:
§ Operand a:
– Class: All Spots per Cell
– Feature: Count
§ Define Custom Expression: a
The subsequent run of the analysis setting will add your custom feature as another column to the
analysis table for the class Cell.
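The effect of this custom feature can be sketched in a few lines of Python; the class names and data layout below are hypothetical and do not reflect ZEN's internal object model:

```python
# Operand a is the Count of the child class "All Spots per Cell",
# evaluated per Cell object; the custom expression is simply "a".
spots_per_cell = {"Cell_1": ["s1", "s2", "s3"], "Cell_2": ["s4"]}

# One value per Cell, as it would appear as an extra table column
spot_count = {cell: len(spots) for cell, spots in spots_per_cell.items()}
print(spot_count)  # {'Cell_1': 3, 'Cell_2': 1}
```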
This custom feature calculates the average area per spot in a cell by dividing the area of the cell by the number of spots in that cell. It is based on an analysis setup for an image with cells and spots. The class tree could look like this:
To create this custom feature you have to define the following operands and expression for the
class Cell:
§ Operand a:
– Class: Cell
– Feature: Area
§ Operand b:
– Class: All Spots per Cell
– Feature: Count
§ Define Custom Expression: a/b
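As a sketch of the expression a/b, with hypothetical example values (not ZEN's internal representation):

```python
# Per Cell: a = Area of the Cell, b = Count of "All Spots per Cell"
cells = [
    {"name": "Cell_1", "area": 120.0, "spot_count": 4},
    {"name": "Cell_2", "area": 90.0,  "spot_count": 3},
]
for cell in cells:
    cell["area_per_spot"] = cell["area"] / cell["spot_count"]  # expression a/b
print([c["area_per_spot"] for c in cells])  # [30.0, 30.0]
```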
The following examples illustrate the functionality of creating custom statistical features in the im-
age analysis wizard, see also Creating Custom Statistical Features [} 428].
This custom statistical feature calculates the percentage of positive cells. For this, your analysis
setting needs to have two classes, one for all cells and one for positive cells, for example:
Use identical segmentation parameters for both classes. In the Region Filter step you need to
add a filter to define the positive cells (e.g. on mean intensity) and set the filter to include only
positive cells. In the Statistics step, you then have to create this custom statistical feature with
the following operands and expression:
§ Operand a:
– Class: Red Positive Cell
– Feature: None
– Statistical Operation: Count
§ Operand b:
– Class: All Cells
– Feature: None
– Statistical Operation: Count
§ Define Custom Expression: a/b*100
After running the analysis setting containing this custom statistical feature, select the Base of the classes tree in the Analysis view tab; you can then display the percentage of red positive cells in the heatmap plot.
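The expression a/b*100 reduces to a simple ratio of the two class counts. A minimal sketch with hypothetical counts for one well:

```python
# a = Count of class "Red Positive Cell", b = Count of class "All Cells"
count_positive = 42   # hypothetical example values
count_all = 168
percent_positive = count_positive / count_all * 100  # expression a/b*100
print(percent_positive)  # 25.0
```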
This custom statistical feature calculates the mean translocation ratio of cells in a well. For this, your analysis setting has to use the method ZOI (Zones of Influence) and needs to be set up to detect cell nuclei, with an appropriate ring width defined so that the ring covers the cytoplasm near the nucleus.
In the Statistics step, define the feature Translocation Ratio, which calculates the mean intensity of the second channel in the nuclei divided by the mean intensity of the same channel in the ring. To do this, define the following operands and expression:
§ Operand a:
– Class: Primary Object
– Feature: Intensity Mean Value of Channel 'EGFP'
– Statistical Operation: Mean
§ Operand b:
– Class: Ring Element
– Feature: Intensity Mean Value of Channel 'EGFP'
– Statistical Operation: Mean
§ Define Custom Expression: a/b
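As a sketch of this ratio with hypothetical intensity values (the actual values come from the measured features of the analysis setting):

```python
# Operand a: Mean of the nuclear 'EGFP' intensities (Primary Object)
# Operand b: Mean of the ring 'EGFP' intensities (Ring Element)
nucleus_means = [200.0, 220.0, 180.0]  # hypothetical per-object mean intensities
ring_means = [100.0, 110.0, 90.0]
a = sum(nucleus_means) / len(nucleus_means)
b = sum(ring_means) / len(ring_means)
print(a / b)  # 2.0  (Translocation Ratio, expression a/b)
```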
Info
Coordinate System
Some feature descriptions contain images with a coordinate system for illustrative purposes.
Note that the actual coordinate system in the software is different and has its point of origin
(0/0) in the top left corner of the image.
The software can automatically detect and measure various properties of objects.
Some general terms relevant for several feature descriptions are the following:
§ Filled: A measurement feature with Filled in its name takes the entire region for the respec-
tive calculation, i.e. any holes the region might contain are included in the calculation.
§ Unscaled: For features that are titled as Unscaled, the scaling of the image is not taken into
account for the measurement. The values returned by these features have the unit pixel.
§ Unit: pixels
§ Value range: 1 ... image size in x-direction
§ Unit: pixels
§ Value range: 1 ... image size in x-direction
§ Unit: pixels
§ Value range: 1 ... image size in y-direction
§ Unit: pixels
§ Value range: 1 ... world coordinate size in y-direction
6.1.17.5 Area
Area of a region.
Area of a region excluding any holes it may contain. The areas of the holes are not included in the
measurement. If you want to include them, use the Area filled parameter.
§ Unit: Unit of area of the scaling assigned to the image (e.g. μm²)
§ Unit: Unit of area of the scaling assigned to the image (e.g. μm²)
Area of a region including the parts cut by a frame. Note that if you define Cut at Frame in the
Frame step of the image analysis, the value of this parameter is identical to the value delivered by
the Area feature.
§ Unit: Unit of area of the scaling assigned to the image (e.g. μm²)
Unscaled area of a region including the parts cut by a frame. Note that if you define Cut at
Frame in the Frame step of the image analysis, the value of this parameter is identical to the
value delivered by the Area Unscaled feature.
§ Unit: pixels
§ Unit: Unit of area of the scaling assigned to the image (e.g. μm²)
§ Unit: pixels²
Returns the area of the measurement frame used in the image. If a ROI is defined in the Frame
step of the wizard, this area is displayed, otherwise the area of the entire image is indicated. This
also includes image areas without pixel values.
§ Unit: µm²
Returns the area of the measurement frame used in the image. If a ROI is defined in the Frame
step of the wizard, this area is displayed, otherwise the area of the entire image is indicated. This
also includes image areas without pixel values.
§ Unit: pixels
The percentage of the area of all regions in relation to the area of all frames. This also includes
image areas without pixel values.
The percentage of the area of all regions including regions that are cut by the frame in relation to
the area of all frames. This also includes image areas without pixel values.
Area of the holes as a percentage of the overall area, i.e. the area of holes divided by the area of
the segmented region.
§ Unit: pixels²
Indicates the back coordinate (highest value in z-direction) of the bounding box of a region. The
box is drawn in parallel to the x, y and z axis.
Indicates the back coordinate (highest value in z-direction) of the unscaled bounding box of a re-
gion. The box is drawn in parallel to the x, y and z axis.
§ Unit: pixels
Indicates the back coordinate (highest value in z-direction) of the unscaled bounding box of a re-
gion in the world coordinate system (WCS). The box is drawn in parallel to the x, y and z axis.
§ Unit: pixels
Indicates the back coordinate (highest value in z-direction) of the bounding box of a region in the
world coordinate system (WCS). The box is drawn in parallel to the x, y and z axis.
§ Unit: pixels
§ Unit: pixels
§ Unit: pixels
§ Unit: pixels
Indicates the depth of the bounding box of a region, i.e. the "length" of the bounding box in z.
Indicates the depth of the unscaled bounding box of a region, i.e. the "length" of the bounding
box in z.
§ Unit: pixels
Indicates the front coordinate (smallest value in z-direction) of the bounding box of a region. The
box is drawn in parallel to the x, y and z axis.
Indicates the front coordinate (smallest value in z-direction) of the unscaled bounding box of a re-
gion. The box is drawn in parallel to the x, y and z axis.
§ Unit: pixels
Indicates the front coordinate (smallest value in z-direction) of the unscaled bounding box of a re-
gion in the world coordinate system (WCS). The box is drawn in parallel to the x, y and z axis.
§ Unit: pixels
Indicates the front coordinate (smallest value in z-direction) of the bounding box of a region in the
world coordinate system (WCS). The box is drawn in parallel to the x, y and z axis.
Indicates the height (size in y-direction) of a bounding box for a region. The box is drawn in paral-
lel to the x and y axis.
Indicates the height (size in y-direction) of a bounding box for a region. The box is drawn in paral-
lel to the x and y axis.
§ Unit: pixels
§ Formula: Bound top - Bound bottom
§ Unit: pixels
§ Unit: pixels
§ Unit: pixels
§ Unit: pixels
Indicates the x coordinate in the world coordinate system (WCS) of the right-hand edge of a
bounding box for a region. The box is drawn in parallel to the x and y axis.
§ Unit: pixels
§ Unit: pixels
Indicates the width (size in x-direction) of a bounding box for a region. The box is drawn in paral-
lel to the x and y axis.
Indicates the width (size in x-direction) of a bounding box for a region. The box is drawn in paral-
lel to the x and y axis.
6.1.17.51 Center X
§ Unit: pixels
The x coordinate in the world coordinate system (WCS) of the geometric center of gravity of a re-
gion.
Depending on the shape of the object, this point may also lie outside a region. The associated y-
coordinate is determined via the Center Y parameter.
§ Unit: pixels
The x coordinate in the world coordinate system (WCS) of the geometric center of gravity of a re-
gion.
Depending on the shape of the object, this point may also lie outside a region. The associated y-
coordinate is determined via the Center Y parameter.
6.1.17.55 Center Y
Depending on the shape of the object, this point may also lie outside a region. The associated x
coordinate is determined via the Center X parameter.
§ Unit: pixels
The y coordinate in the world coordinate system (WCS) of the geometric center of gravity of a re-
gion.
Depending on the shape of the object, this point may also lie outside a region. The associated x
coordinate is determined via the Center X parameter.
§ Unit: pixels
The y coordinate in the world coordinate system (WCS) of the geometric center of gravity of a re-
gion.
Depending on the shape of the object, this point may also lie outside a region. The associated x
coordinate is determined via the Center X parameter.
6.1.17.59 Center Z
§ Unit: pixels
Indicates the z coordinate of the unscaled geometric center of gravity of a region in the world co-
ordinate system (WCS).
Depending on the shape of the object, this point can also lie outside a region. The associated x
and y coordinates are determined by the Center X Unscaled WCS and Center Y Unscaled WCS
features.
§ Unit: pixels
Indicates the z coordinate of the geometric center of gravity of a region in the world coordinate
system (WCS).
Depending on the shape of the object, this point can also lie outside a region. The associated x
and y coordinates are determined by the Center X WCS and Center Y WCS features.
6.1.17.63 Circularity
6.1.17.64 Compactness
6.1.17.65 Convexity
6.1.17.66 Count
6.1.17.69 Diameter
§ Unit: degrees
§ Value range: 0 ... 180°
The major axis of an ellipse with the same geometric moment of inertia as the current region is
determined in accordance with the Ellipse major parameter. The angle to the x-axis is then de-
termined. The indication of the angle always relates to a counterclockwise direction.
§ Unit: degrees
§ Value range: 0 ... 180°
This feature uses unscaled pixels for calculating the angle. The results may differ from those of Ellipse Angle.
§ Unit: pixels
§ Unit: pixels
Calculates the length of the semi-major axis of the equivalent ellipsoid of the region.
Calculates the length of the semi-major axis of the equivalent unscaled ellipsoid of the region.
§ Unit: pixels
Calculates the length of the semi-mean axis of the equivalent ellipsoid of the region, i.e. half the
length of the medium/middle axis of the three dimensional ellipsoid.
Calculates the length of the semi-mean axis of the equivalent unscaled ellipsoid of the region, i.e.
half the length of the medium/middle axis of the three dimensional ellipsoid.
§ Unit: pixels
Calculates the length of the semi-minor axis of the equivalent ellipsoid of the region.
Calculates the length of the semi-minor axis of the equivalent unscaled ellipsoid of the region.
§ Unit: pixels
§ Unit: degrees
§ Value range: 0 ... 180°
§ Unit: pixels
The minimum feret of a region is determined on the basis of distance measurements. Two straight
lines are positioned on opposite sides of the object, like a sliding caliper, at 128 angle positions.
The corresponding distance is measured for each angle position. The minimum value determined
is the minimum feret.
§ Unit: degrees
§ Value range: 0 ... 180°
§ Unit: pixels
Feret ratio (a:b).
The ratio of Feret Minimum to Feret Maximum is calculated. This ratio makes it possible to draw conclusions about the shape of the measured objects. If the Feret ratio has a low value, long, elongated objects are present. Values approaching 1 indicate the presence of compact or circular objects, as in this case Feret Minimum and Feret Maximum have very similar values. The Form Circle feature is also suitable for assessing the circularity of an object.
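The sliding-caliper procedure described for the Feret measurement can be sketched for a 2D point set; this is an illustrative implementation, not ZEN's:

```python
import numpy as np

def feret_diameters(points, angles=128):
    """Caliper distances of a 2D point set at evenly spaced angle
    positions; returns (Feret Minimum, Feret Maximum)."""
    pts = np.asarray(points, dtype=float)
    widths = []
    for theta in np.linspace(0.0, np.pi, angles, endpoint=False):
        direction = np.array([np.cos(theta), np.sin(theta)])
        projection = pts @ direction           # project onto the caliper axis
        widths.append(projection.max() - projection.min())
    return min(widths), max(widths)

# Unit square: Feret Minimum = 1 (axis-aligned), Feret Maximum = sqrt(2) (diagonal)
square = [(0, 0), (1, 0), (0, 1), (1, 1)]
fmin, fmax = feret_diameters(square)
print(round(fmin / fmax, 3))  # Feret ratio, about 0.707
```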
6.1.17.94 ID
Returns the ID of the segmented object that is saved in the object store.
Returns the time stamp (date and time) of the image acquisition. This value is not equivalent to the time stamp of the image file, i.e. it does not change when the image file is saved, copied, or re-saved. It is formatted according to the settings of the operating system, e.g. mm/dd/yyyy would be typical in US-based systems, whereas dd/mm/yyyy would be typical in EU-based systems.
The percentage of the area of all regions in relation to the image area of all frames. This only in-
corporates image areas which contain pixel values.
Returns the container category (e.g. sample, control) of the current scene (well).
Indicates the position of the z-motor (in µm) at the time of image acquisition.
Returns the area of the measurement frame used in the image. If a ROI is defined in the Frame
step of the wizard, this area is displayed, otherwise the area of the entire image is indicated. This
only incorporates image areas which contain pixel values.
§ Unit: Unit of area of the scaling assigned to the image (e.g. μm²)
Returns the unscaled area of the measurement frame used in the image. If a ROI is defined in the
Frame step of the wizard, this area is displayed, otherwise the area of the entire image is indi-
cated. This only incorporates image areas which contain pixel values.
§ Unit: pixels
Returns the total distance across the y-axis of the image, i.e. the height of the image.
§ Unit: µm
Returns the total distance across the y-axis of the image, i.e. the height of the image.
§ Unit: pixels
§ Restriction: This value is only available for images that have previously been acquired with Ax-
ioVision or saved in AxioVision ZVI format.
Returns the time point (frame) in which the object was segmented.
Returns the total distance across the x-axis of the image, i.e. the width of the image.
§ Unit: µm
Returns the total distance across the x-axis of the image, i.e. the width of the image.
§ Unit: pixels
6.1.17.115 Index
A running number of the object, in the order in which it was measured in the image. This differs from ID, which is a global identifier that also takes the IDs of the Classes collections into account (the topmost "Classes" has ID 1 and the "Base" has ID 0, so the first detected object of the first class starts at ID 2). The Index instead starts at 1 for every region, reflecting the order of appearance of an object.
The difference between the pixel values of the brightest and darkest pixels in the object, i.e. Intensity Maximum of channel "C1" minus Intensity Minimum of channel "C1".
The standard deviation of the brightness (pixel value) of the pixels in the object.
6.1.17.122 Perimeter
Perimeter of a region.
This parameter is specially optimized for measuring the perimeters of circles. If the measured re-
gion contains holes, the total perimeter including the perimeters of the hole structures is deter-
mined. If you only want the perimeter of the outside contour to be determined, use the Perime-
ter filled parameter.
Perimeter of a region.
This parameter is specially optimized for measuring the perimeters of circles. If the measured re-
gion contains holes, the total perimeter including the perimeters of the hole structures is deter-
mined. If you only want the perimeter of the outside contour to be determined, use the Perime-
ter filled parameter.
§ Unit: pixels
6.1.17.124 Radius
The object is measured using the Area feature. A circle with the same area as the object is cre-
ated. The radius of this circle is returned.
In case of a three dimensional analysis, the equivalent sphere with the Volume of the region is
measured and the radius of the sphere is returned.
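Both conversions follow directly from the circle and sphere formulas. An illustrative sketch (not ZEN's implementation):

```python
import math

def equivalent_radius_2d(area):
    """Radius of a circle with the same area as the region."""
    return math.sqrt(area / math.pi)

def equivalent_radius_3d(volume):
    """Radius of a sphere with the same volume as the region."""
    return (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)

print(round(equivalent_radius_2d(math.pi * 25.0), 3))        # 5.0
print(round(equivalent_radius_3d(4.0 / 3.0 * math.pi * 8.0), 3))  # 2.0
```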
Returns the ID of the region class which is assigned to the object according to the class tree.
Returns the name of the region class to which the object is assigned and that was defined in the
Classes step of the wizard.
Returns the name of the parent class to which the object is assigned and that was defined in the
Classes step of the wizard.
6.1.17.130 Roundness
Calculates a value for the roundness (between 0 and 1) of the region based on the Area and the
Feret Maximum (FeretMax) features. In case of a three dimensional analysis, the roundness value
is calculated based on the Volume feature of the region.
6.1.17.131 Sphericity
Calculates the surface area of a region excluding any holes it may contain. The areas of the holes
are not included in the measurement. If you want to include them, use the Surface Area Filled
feature.
§ Unit: Based on the unit of the scaling assigned to the image (e.g. μm²)
Calculates the surface area of a region including any holes it contains. The holes are interpreted as
if they belong to the region or are filled prior to the measurement. If you do not want the holes to
be measured, use the Surface Area feature.
§ Unit: Based on the unit of the scaling assigned to the image (e.g. μm²)
Calculates the surface area of an unscaled region including any holes it contains. The holes are in-
terpreted as if they belong to the region or are filled prior to the measurement. If you do not
want the holes to be measured, use the Surface Area Unscaled feature.
§ Unit: pixels²
Calculates the surface area of an unscaled region excluding any holes it may contain. The areas of
the holes are not included in the measurement. If you want to include them, use the Surface
Area Filled Unscaled feature.
§ Unit: pixels²
§ Unit: Unit of area of the scaling assigned to the image (e.g. μm²)
The area of all regions of the direct subclasses in relation to the area of the "parent" class, i.e. the
area of all regions of the subclass divided by the area of the "parent" class.
Note that this feature only considers values of direct subclasses of a class, i.e. subclasses that are
directly on the next lower hierarchy level in the class list. Values of subclasses on lower hierarchy
levels ("subclass of a subclass") are not considered.
§ Unit: pixels
6.1.17.139 Volume
§ Unit: Based on the unit of the scaling assigned to the image (e.g. μm³)
Calculates the volume of the measurement frame used in the image. If a ROI is defined in the
Frame step of the wizard, this area is displayed, otherwise the volume of the entire image area is
returned.
§ Unit: Based on the unit of the scaling assigned to the image (e.g. μm³)
Calculates the unscaled volume of the measurement frame used in the image. If a ROI is defined
in the Frame step of the wizard, this area is displayed, otherwise the volume of the entire image
area is returned.
§ Unit: pixels³
Calculates the percentage of the volume of the regions according to the frame volume.
§ Unit: pixels³
6.2.1 Edges
This method performs a gradient filtering. Based on a 2 x 2 matrix in the X- and Y-direction, a gradient image is calculated using the larger of the two components. The edges are darker than with the method Gradient Sum.
This method performs a gradient filtering. Based on the sum of a 2 x 2 matrix in the X- and Y-direction, a gradient image is calculated. The edges are brighter than with the method Gradient Max.
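A minimal sketch of the two gradient variants using simple forward differences; the exact 2 x 2 kernels ZEN uses are not specified in the manual, so the differencing below is an assumption:

```python
import numpy as np

def gradients(image):
    """Absolute gradient components in the X and Y directions,
    computed from neighboring pixel differences (illustrative)."""
    img = np.asarray(image, dtype=float)
    gx = np.abs(img[:-1, 1:] - img[:-1, :-1])  # X direction
    gy = np.abs(img[1:, :-1] - img[:-1, :-1])  # Y direction
    return gx, gy

img = np.array([[0, 0, 10], [0, 0, 10], [0, 0, 10]])
gx, gy = gradients(img)
grad_max = np.maximum(gx, gy)  # Gradient Max: larger component, darker edges
grad_sum = gx + gy             # Gradient Sum: sum of components, brighter edges
print(grad_max.max(), grad_sum.max())
```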
6.2.1.3 Highpass
This method performs high-pass filtering. The high pass filter is defined as the difference between
the original image and the low-pass filtered original.
Parameter Description
Normalization Depending on the image processing function you have selected, not all choices are available in the list.
- Clip Gray levels that exceed or fall below the specified gray value range
are automatically set to the lowest/highest gray value (black or white).
The effect corresponds to underexposure or overexposure. This means
that in some cases information is lost.
- Wrap If the result is greater than the maximum gray value of the image, the
value maximum gray value +1 is subtracted from it.
- Shift Normalizes the output to the value gray value + max. gray value/2.
Count Here you set the number of repetitions. I.e. the number of times the
function is applied sequentially to the respective result of the filtering.
The effect is increased correspondingly.
Kernel Size You can set the filter size in the x-, y- and z-direction, symmetrically around the subject pixel. This should match the size of the transition region between objects and background.
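The high-pass definition above (original minus low-pass filtered original) can be sketched with a simple box mean standing in for the low-pass filter; the choice of box filter is an assumption, as the manual does not specify ZEN's low-pass kernel:

```python
import numpy as np

def highpass(image, kernel_size=3):
    """High pass: difference between the original image and a
    low-pass (box mean) filtered version of it (illustrative)."""
    img = np.asarray(image, dtype=float)
    pad = kernel_size // 2
    padded = np.pad(img, pad, mode="edge")
    low = np.zeros_like(img)
    for dy in range(kernel_size):        # accumulate the box neighborhood
        for dx in range(kernel_size):
            low += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    low /= kernel_size ** 2              # box mean = low-pass result
    return img - low
```

A flat image yields an all-zero high-pass result, since the image equals its own low-pass version.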
6.2.1.4 Laplace
This method is an edge filter that calculates the variance of each pixel relative to its neighboring pixels within the lateral filter size.
Parameter
Kernel Size in X/Y
Here you set the matrix size in XY symmetrically around the pixel. This determines the degree of
smoothing effect in the X/Y direction.
6.2.1.6 Roberts
This method calculates a gradient image using the Roberts filter matrix. Large gray value differ-
ences between neighbors are shown as light gray values. No changes are indicated by a value of 0
(black). Edges are thinner than with the Sobel method.
6.2.1.7 Sobel
6.2.2 Arithmetics
6.2.2.1 Add
This function adds the two images Input1 and Input2 pixel by pixel and generates the Output
image. Note that a resulting gray value may be greater than the maximum gray value of the im-
age.
Parameter
Parameter Description
Normalization Depending on the image processing function you have selected, not all choices are available in the list.
- Clip Automatically sets the gray levels that exceed or fall below the speci-
fied gray value range to the lowest/highest gray value (black or
white). The effect corresponds to underexposure or overexposure.
This means that in some cases information is lost.
- Shift Normalizes the output to the value gray value + max. gray value/2.
- Wrap If the result is greater than the maximum gray value of the image, the
value maximum gray value +1 is subtracted from it.
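The Clip and Wrap normalization modes for the pixel-wise Add can be sketched as follows; the 8-bit maximum gray value and the exact arithmetic are assumptions for illustration:

```python
import numpy as np

def add_images(a, b, normalization="Clip", max_gray=255):
    """Pixel-wise Add with Clip or Wrap normalization (sketch;
    ZEN's exact arithmetic may differ)."""
    result = np.asarray(a, dtype=np.int64) + np.asarray(b, dtype=np.int64)
    if normalization == "Clip":
        # Values outside the gray value range are set to the limits
        return np.clip(result, 0, max_gray)
    if normalization == "Wrap":
        # max_gray + 1 is subtracted from overflowing results
        return np.where(result > max_gray, result - (max_gray + 1), result)
    raise ValueError(normalization)

a = np.array([200, 100])
b = np.array([100, 50])
print(add_images(a, b, "Clip"))  # [255 150]
print(add_images(a, b, "Wrap"))  # [ 44 150]
```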
This function adds the factor Addend to each pixel of the Input image and generates the Output
image.
For this function to produce useful results, the pixel values of the input image must be in float format. If images of type 8 bit, 16 bit, 24 bit or 48 bit are used as the input image, the output image is of the same type, i.e. the output value is clipped to an integer value. Negative values are set to 0; values higher than the maximum gray value are set to the maximum possible gray value of the image.
Parameter
Parameter Description
Addend Here you adjust the addend.
6.2.2.3 Average
The function calculates the average of the two images Input1 and Input2 pixel by pixel and gen-
erates the Output image.
6.2.2.4 Combine
This function calculates the linear combination of two images on a pixel basis.
Both Input images are first multiplied by the specified factor and then added together. The
brightness of the Output image can then be adjusted. The combination of two images can be
used to reduce noise, for example. This is achieved by acquiring several images of the same scene
and subsequently combining them.
Parameters
Parameter Description
Factor 1 Weighting factor for input image 1.
Value range: -1.00 ... +1.00
6.2.2.5 Divide
This function divides the images Input1 by Input2 pixel by pixel and generates the Output im-
age.
Parameter
Parameter Description
Factor Here you adjust the scaling factor by which the result of the division is
multiplied. Using this factor it is possible to keep the gray values of
the output image within the range of 0 to the maximum gray value.
Values that are greater than the maximum gray value are in any case
limited to the maximum gray value. Negative values are set to 0.
Value range: -20,000 ... +20,000
6.2.2.6 Exponential
This function calculates the exponential function of the Input image pixel by pixel and generates
the Output image.
For this function to produce useful results, the pixel values of the input image must be in float format. If images of type 8 bit, 16 bit, 24 bit or 48 bit are used as the input image, the output image is of the same type, i.e. the output value is clipped to an integer value. Negative values are set to 0; values higher than the maximum gray value are set to the maximum possible gray value of the image.
6.2.2.7 Invert
This function additively inverts the gray values of the input image into the output image. Bright
pixels become darker and vice versa. The parameter Operand is used to adjust the output range.
The actual mathematical operation is: output gray value = operand - input gray value. Negative
results are clipped to 0 and overflow results are clipped to the maximum possible gray value.
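The additive inversion can be sketched with NumPy. A sketch only; the function name and the 8-bit example are assumptions, not the ZEN implementation:

```python
import numpy as np

def invert(image, operand=255):
    """Additive inversion (sketch): output = operand - input, clipped to 0 ... 255."""
    result = operand - image.astype(np.int32)
    return np.clip(result, 0, 255).astype(np.uint8)

img = np.array([[0, 64, 255]], dtype=np.uint8)
inv = invert(img)   # bright pixels become dark and vice versa
```

With the operand set to the maximum gray value, this is the classical negative image.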
6.2.2.8 Logarithm
This function calculates the logarithm of the Input image pixel by pixel and generates the Output
image.
To obtain useful results with this function, the pixel values of the input image should be in float
format. If images of type 8 bit, 16 bit, 24 bit or 48 bit are used as input, the output image is of
the same type, i.e. the output values are clipped to integer values. Negative values are set to 0,
and values higher than the maximum gray value are set to the maximum possible gray value of
the image.
6.2.2.9 Maximum
The function calculates the maximum values of the two images Input1 and Input2 pixel by pixel.
6.2.2.10 Minimum
The function calculates the minimum values of the two images Input1 and Input2 pixel by pixel.
6.2.2.11 Multiply
This function multiplies the two images Input1 and Input2 pixel by pixel and generates the Out-
put image.
Parameter
Parameter Description
Factor Here you adjust the scaling factor by which the result of the multipli-
cation is divided. Using this factor it is possible to keep the gray values
of the Output image within the range of 0 to the maximum gray
value. Values that are greater than the maximum gray value are in any
case limited to the maximum gray value. Negative values are set to 0.
Value range: -20,000 ... +20,000
This function multiplies each pixel of the Input image with an adjustable Factor and generates
the Output image.
To obtain useful results with this function, the pixel values of the input image should be in float
format. If images of type 8 bit, 16 bit, 24 bit or 48 bit are used as input, the output image is of
the same type, i.e. the output values are clipped to integer values. Negative values are set to 0,
and values higher than the maximum gray value are set to the maximum possible gray value of
the image.
Parameter
Parameter Description
Factor Here you adjust the factor by which each pixel is multiplied.
6.2.2.13 Reciprocal
This function computes the reciprocals of the gray values of the input image into the output im-
age. Bright pixels become darker and vice versa. The parameter Factor is used to adjust the out-
put range. The actual mathematical operation is: output gray value = factor / input gray value.
Negative results are clipped to 0 and overflow results are clipped to the maximum possible gray
value.
6.2.2.14 Square
This function calculates the square of the Input image pixel by pixel and generates the Output
image.
To obtain useful results with this function, the pixel values of the input image should be in float
format. If images of type 8 bit, 16 bit, 24 bit or 48 bit are used as input, the output image is of
the same type, i.e. the output values are clipped to integer values. Negative values are set to 0,
and values higher than the maximum gray value are set to the maximum possible gray value of
the image.
6.2.2.15 Square Root
This function calculates the square root of the Input image pixel by pixel and generates the Out-
put image.
To obtain useful results with this function, the pixel values of the input image should be in float
format. If images of type 8 bit, 16 bit, 24 bit or 48 bit are used as input, the output image is of
the same type, i.e. the output values are clipped to integer values. Negative values are set to 0,
and values higher than the maximum gray value are set to the maximum possible gray value of
the image.
6.2.2.16 Subtract
This function subtracts the image Input2 from the image Input1 pixel by pixel and generates the
Output image. Note that a resulting gray value may be less than 0.
Parameter
Parameter Description
Normalization Depending on the IP function you have selected not all choices are
available in the list.
- Clip Automatically sets gray levels that exceed or fall below the valid gray value range to the lowest/highest gray value (black or white). The effect corresponds to underexposure or overexposure. This means that in some cases information is lost.
- Shift Normalizes the output to the value result + maximum gray value / 2.
- Wrap If the result is greater than the maximum gray value of the image, the value maximum gray value + 1 is subtracted from it.
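The three normalization modes can be sketched with NumPy for an 8-bit subtraction. This is an illustration under stated assumptions: the function name is hypothetical, the Shift formula follows the table above ("result + max. gray value / 2"), and Wrap is implemented as modulo max + 1, which also covers negative results:

```python
import numpy as np

MAX = 255  # maximum gray value of an 8-bit image

def subtract(in1, in2, normalization="Clip"):
    """Pixel-wise subtraction with the three normalization modes (sketch)."""
    diff = in1.astype(np.int32) - in2.astype(np.int32)
    if normalization == "Clip":
        out = np.clip(diff, 0, MAX)                    # out-of-range results are cut off
    elif normalization == "Shift":
        out = np.clip(diff + (MAX + 1) // 2, 0, MAX)   # result shifted by half the range
    else:  # "Wrap"
        out = diff % (MAX + 1)                         # wrap around modulo max + 1
    return out.astype(np.uint8)

a = np.array([[100]], dtype=np.uint8)
b = np.array([[150]], dtype=np.uint8)
clipped = subtract(a, b, "Clip")    # 100 - 150 = -50 -> 0
shifted = subtract(a, b, "Shift")   # -50 + 128    -> 78
wrapped = subtract(a, b, "Wrap")    # -50 mod 256  -> 206
```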
6.2.3 Binary
6.2.3.1 And
This method performs a bit-by-bit AND calculation for the input images (Input1 and Input2). This
function is particularly useful for the masking of images. All the pixels that are white in input im-
age 1 AND input image 2 are set to white in the resulting image. Pixels that are white in only one
of the two input images become black.
6.2.3.2 Mask
This tool enables you to isolate features in an image and to suppress image areas not of interest
using a mask image.
Parameter Description
Input The input image from which you wish to isolate features or suppress areas not of interest.
Mask The mask image that is applied to the input image. The mask is laid on top of the input image. Image regions of the input image in1 where the mask is white remain unchanged; image regions where the mask is black are blacked out and suppressed. Both images are aligned at the upper left corner. If the mask image is smaller than the input image in1, the mask is applied only to part of the input image, beginning at the upper left corner. The rest of the input image remains unchanged.
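The masking behavior of the bit-by-bit AND can be sketched with NumPy. A sketch, not the ZEN implementation; the 8-bit mask with values 0/255 is an assumption:

```python
import numpy as np

img  = np.array([[10, 20], [30, 40]], dtype=np.uint8)
mask = np.array([[255, 0], [0, 255]], dtype=np.uint8)  # white = keep, black = suppress

# Bit-by-bit AND: ANDing a pixel with 255 keeps it unchanged,
# ANDing with 0 blacks it out, so a binary mask isolates features.
masked = np.bitwise_and(img, mask)
```

For a mask whose white pixels carry the maximum gray value, the AND operation therefore acts exactly like the Mask function described above.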
6.2.3.3 Distance
This method creates a distance-transformed image (distance map, distance image) from a binary
image. The Euclidean distance to the next background pixel (gray value 0) is calculated for each
pixel within the white regions of the binary image (input image), and coded as a gray value. Bright
pixels (high gray values) indicate a long distance to the background.
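The distance transform can be illustrated with SciPy, which also computes the Euclidean distance of each foreground pixel to the nearest background pixel. This is an illustration; ZEN's gray-value coding of the distances may differ:

```python
import numpy as np
from scipy import ndimage

# Binary input: white region (1) on black background (0).
binary = np.array([[0, 0, 0, 0, 0],
                   [0, 1, 1, 1, 0],
                   [0, 1, 1, 1, 0],
                   [0, 1, 1, 1, 0],
                   [0, 0, 0, 0, 0]], dtype=np.uint8)

# Euclidean distance of every foreground pixel to the nearest
# background pixel; the center pixel is farthest from the edge
# and therefore receives the highest (brightest) value.
dist = ndimage.distance_transform_edt(binary)
```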
6.2.3.4 Exoskeleton
This method generates an image with the skeleton of the influence zone of regions. The back-
ground in the Input image is analyzed, and the skeleton of the influence zones of the objects is
determined. This is then saved as a binary image in the Output image.
This method fills holes in regions. Holes are structures that have the gray value 0 and are
completely surrounded by pixels with a gray value unequal to 0. Regions outside of the image are
assumed to be black. Black areas that touch the edge of the image are therefore preserved, even
if they are surrounded by a contour.
Parameters
Parameter Description
Label Background Activated: Assigns gray values to the background objects with con-
nectivity 4.
Deactivated: Assigns gray values to the background objects with
Connectivity 8.
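The hole-filling behavior can be illustrated with SciPy; scipy.ndimage.binary_fill_holes makes the same assumption that the area outside the image is background, so border-touching black areas are preserved. An illustration only, not the ZEN implementation:

```python
import numpy as np
from scipy import ndimage

# A white ring: the enclosed black pixels form a hole.
region = np.array([[1, 1, 1, 1],
                   [1, 0, 0, 1],
                   [1, 0, 0, 1],
                   [1, 1, 1, 1]], dtype=np.uint8)

# The enclosed black pixels are filled; black areas connected to the
# image border would remain black, since the outside is assumed black.
filled = ndimage.binary_fill_holes(region).astype(np.uint8)
```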
This function marks binary regions of the input image. For each region in the input image, a check
is performed to establish whether a pixel has been set in the marker image.
Parameters
Parameter Description
Select Marked Activated: Copies the marked region into the output image.
Deactivated: Copies the unmarked region into the output image.
6.2.3.8 Not
This function performs a binary "not" operation on all bits of the binary representation of an input
pixel's gray value. A 0-bit in the input pixel results in a 1-bit in the corresponding output pixel, and
a 1-bit in the input results in a 0-bit in the output. For integral image types, the resulting output
gray value is the difference between the maximum possible gray value and the input gray value;
for float image types the results are not meaningful due to the bit layout of the float format.
6.2.3.9 Or
This method performs a bit-by-bit OR calculation for the Input1 and Input2 images. This function
can be used to combine binary masks or regions. All the pixels that are white in input image 1 OR
input image 2 are set to white in the resulting image. This means that all the white pixels in the
two input images are white in the resulting image.
6.2.3.10 Scrap
Parameters
Parameter Description
Minimum Area Here you adjust the minimum area in Px.
Select in Range
6.2.3.11 Separation
Using this function you can attempt to separate objects that are touching (and that you have
been unable to separate using segmentation) automatically.
Parameters
Parameter Description
Separation Mode
- Morphology This method separates objects by first reducing and then enlarging
them, making sure that once objects have been separated they do not
merge together again.
- Watersheds With this method you can separate objects that are roughly the same
shape. This method may however result in the splitting of elongated
objects.
Count Enter how often the method is applied successively to the result at the
location of the separation, using the slider or input field.
6.2.3.12 Thinning
Parameters
Parameter Description
Thinning Element Select the desired thinning method here.
Converge If activated, the function is automatically repeated until all regions would be deleted by the next erosion step.
AxioVision Compatibility Performs the function exactly as in AxioVision to achieve identical results.
This function works in the same way as normal erosion: structures in the input image are re-
duced and thin connections between regions are separated. The difference from normal erosion
is that structures are eroded only until they would be deleted by the next erosion step. With ero-
sion, the pixel in question is set to the gray value 0 (black) in the resulting image.
For regions (pixels) at the image edge, the assumption is that the pixels outside the image are
white.
Parameters
Parameter Description
Structure Element Here you select the preferred direction of morphological change (e.g.
Cross, Diagonal).
Count Here you set the number of repetitions. This means that the function
is applied a number of times in succession to the filtering result. This
increases the effect accordingly.
Converge If activated, the function is automatically repeated until all regions would be deleted by the next erosion step.
6.2.3.14 Xor
This method performs a bit-by-bit XOR calculation for the Input1 and Input2 images. This func-
tion can be used to combine binary masks or regions. All pixels that are white in either input im-
age 1 or input image 2, but not in both, are set to white in the resulting image. Pixels that are
white in both input images are set to black.
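The XOR behavior for binary (0/255) masks can be sketched with NumPy; an illustration, not the ZEN implementation:

```python
import numpy as np

a = np.array([[255, 255, 0, 0]], dtype=np.uint8)
b = np.array([[255, 0, 255, 0]], dtype=np.uint8)

# White where exactly one input is white; black where both inputs agree.
result = np.bitwise_xor(a, b)
```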
6.2.4 Morphology
Morphology functions apply structure elements to images. A structure element is like a stencil
with holes. When the stencil is placed on an image, only some pixels are visible through the
holes. The gray values of these pixels are collected and their extreme gray value (minimum or
maximum) is computed.
This extreme gray value is assigned to the pixel of the resulting image that corresponds to the
position of the origin of the stencil on the input image. When the stencil has been placed at all
positions of the input image, all pixels of the resulting image have been assigned. If bigger struc-
ture elements are required than those provided, they can be achieved by iterating the small ele-
ments using the Count parameter.
The following functions are available:
Function Description
Erode Shrinks bright structures on a darker background in the input image. Thin connections between structures and small structures themselves will disappear.
Open First erodes (Erode class) the bright structures on a darker background in the input image, then dilates (Dilate class) the result by the same number of steps. Thus it separates bright structures on a darker background, but approximately keeps the size of the structures.
Close First dilates (Dilate class) the bright structures on a darker background in the input image, then erodes (Erode class) the result by the same number of steps. Thus it connects bright structures on a darker background, but approximately keeps the size of the structures.
Top Hat (White) Computes the difference between the original image and the image produced by an open operation (Open class). Bright structures which were flattened by the opening are strengthened in the result. This is like putting a top hat with the size of the open operation upon the structure and keeping only the part inside the hat.
Top Hat (Black) Computes the difference between the original image and the image produced by a close operation (Close class). Dark structures which were flattened by the closing are strengthened in the result. This is like lifting a top hat with the size of the close operation beneath the structure from the dark side and keeping only the part inside the hat.
Gradient Computes the difference between the dilated image and the eroded image (Dilate and Erode class). Since a point in the dilated image has the maximum gray value and the corresponding point in the eroded image has the minimum gray value within the structure element, the difference is zero for regions of constant gray values and gets bigger for steeper gray value ramps or edges.
Grey Reconstruction Works mainly as an iterated dilation (Dilate class) of the image, but with a constraint image as a second input image. After every dilation step the pixel-wise minimum of the dilated image and the constraint image is computed and gives the next image to be dilated. The computation stops automatically when all the just-dilated pixels are bigger than the corresponding ones in the constraint image.
Parameters
Parameter Description
Structure Element Here you select the desired structure element. The following elements are
available: Horizontal, Diagonal 45°, Vertical, Diagonal 135°, Cross,
Square, Octagon.
Count Here you can adjust the number of repetitions to define the size of
the structure element.
Binary Only available for Erode, Dilate, Open and Close function.
Activated: Creates a binary image. The calculation will be faster.
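The stencil principle (minimum or maximum under the structure element) and the derived operations can be sketched with SciPy's gray-value morphology. An illustration under stated assumptions: the cross element and the 5x5 test image are examples, and the implementation details may differ from ZEN:

```python
import numpy as np
from scipy import ndimage

# A bright 3x3 block on a dark background.
img = np.array([[0, 0, 0, 0, 0],
                [0, 9, 9, 9, 0],
                [0, 9, 9, 9, 0],
                [0, 9, 9, 9, 0],
                [0, 0, 0, 0, 0]], dtype=np.uint8)

cross = ndimage.generate_binary_structure(2, 1)  # 3x3 cross structure element

eroded   = ndimage.grey_erosion(img, footprint=cross)    # minimum under the stencil
dilated  = ndimage.grey_dilation(img, footprint=cross)   # maximum under the stencil
gradient = dilated - eroded             # large where gray values change steeply
opened   = ndimage.grey_dilation(eroded, footprint=cross)  # Open = erode, then dilate
tophat   = img - opened                 # Top Hat (White): flattened bright structures
```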
6.2.5 Smooth
6.2.5.1 Rank
This method performs a rank order filtering. The gray levels of the resulting image are determined
by calculating the rank within the matrix of the filter size in the X and Y directions. Even filter
sizes are automatically set to the next odd number. A low rank value enlarges dark areas; a
higher rank value increases bright areas of the image.
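Rank order filtering can be illustrated with scipy.ndimage.rank_filter; within each window the gray values are sorted and the value at the chosen rank is returned. A sketch only; the 3x3 test image is an example and the border handling may differ from ZEN:

```python
import numpy as np
from scipy import ndimage

img = np.array([[10, 10, 10],
                [10, 200, 10],
                [10, 10, 10]], dtype=np.uint8)

# 3x3 rank filter: rank 0 is the minimum (enlarges dark areas),
# rank 8 is the highest rank in a 3x3 window, i.e. the maximum
# (enlarges bright areas). Rank 4 would be the median.
low  = ndimage.rank_filter(img, rank=0, size=3)
high = ndimage.rank_filter(img, rank=8, size=3)
```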
6.2.6 Utilities
With this method an HLS image can be generated from the individual color extractions H, L and S.
This function superimposes noise with defined characteristics on an image for testing purposes.
Parameter Description
Signal to Noise Ratio Adjusts the signal-to-noise ratio. Range 0.10 - 100.00.
Distribution
This method generates the individual color extractions for an HLS input image. The resulting images
for hue, lightness and saturation take the form of gray images.
6.3 Intellesis
This module enables you to use machine-learning algorithms for segmenting images using pixel
classification. It uses different feature extractors to classify pixels inside an image based on the
training data and the labeling provided by the user. There is a wide variety of use cases because
the functionality itself is "data-agnostic", meaning it can be used with virtually any kind of image
data.
The extension has the following main functionality:
§ Any user can intuitively perform image segmentation without advanced training by simply la-
beling what shall be segmented.
§ Import of any image format readable by the software, incl. CZI, OME-TIFF, TIFF, JPG, PNG
and TXM (special import required).
§ Creation of pre-defined image analysis settings (*.czias) using machine-learning based seg-
mentation that can be used inside the ZEN measurement framework.
§ Integration of the Trainable Segmentation processing function within the OAD environment.
Application Example: XRM (X-Ray Microscopy) image from sandstone showing the main steps when working with the Intellesis Trainable Segmentation module.
6.3.2 FAQ/Terminology
Question/Term Description
Machine Learning The Intellesis Trainable Segmentation module uses machine learning
to automatically identify objects within an image according to a
pre-defined set of rules (the model). This enables any microscopy user
to perform image segmentation even on complex data sets without
programming experience or advanced knowledge of how to set up
an image segmentation.
What is a "Model" ? A model is a collection of rules according to which the software at-
tributes the pixels to a class. Such a class is mutually exclusive for a
given pixel, i.e. a pixel can only belong to one class. The model is the
result of (repeated) labeling and training a subset of the data. After
the model is trained (the labels provided by the user were used to
"train" the classifier), it can be applied to the full data set in image
processing, or it can be used to create an image analysis setting
(*.czias) to be used with the ZEN image analysis module.
In image processing the trained model can be applied to an image or
data set and perform segmentation automatically. As result you will
get two images, the segmented image on the one hand and a confi-
dence map on the other.
What is a "Class" ? A class is a group of objects (consisting of individual pixels) with simi-
lar features. According to the selected model the pixels of the image
will be attributed as belonging to a certain class, e.g. cell nuclei, inclu-
sions in metals, etc.
Every model has two built-in classes by default, because at least
two classes are needed (e.g. cells and background, or steel and
inclusions). Of course, more classes can be defined if necessary.
What is "Labeling" ? Instead of using a series of complex image processing steps in order
to extract the features of the image, you just need to label some ob-
jects in the image that belong to the same class. Based on this manual
labeling the software will attribute the pixels of the image as belong-
ing to a certain class. In order to refine the result, you can re-label
wrongly attributed pixels to assign them to another class.
What is "Training" ? During the training process (within the training user interface) you can
repeatedly label structures as belonging to one class, run the training,
check if the result matches your expectation and, if necessary, refine
the labeling in order to improve the result. The result is a trained
model (a set of rules) which produces the desired result when applied
to the training data.
With the labeled pixels and their classes a classifier will be trained. The
classifier will then try to automatically assign single pixels to classes.
Training UI The user interface for training is the starting point of the automatic
(User Interface) image segmentation process. Here you import images, label, and train
the model which you can later use for automatic image segmentation.
Within this interface you can load the training data, define the classes
of objects found in your data and train the classifier to assign the ob-
jects to the correct classes.
What is "Segmenting" or "Segmentation"? In general, segmentation is the combination of pixels of the same class
within an image. Before you can perform segmentation, the segmen-
tation model has to be trained. Within the Training UI you train the
software by labeling specific objects or structures that belong to dif-
ferent classes. A pseudo-segmentation is performed each time you
train the model so that you can see if the feature extractor works for your
image.
One output of the Intellesis Trainable Segmentation processing
function is the fully segmented image or data set using the trained
model. The second output is the confidence map.
Confidence Map The confidence map is one of two resulting images when you apply a
trained model to an image by using the processing function Intellesis
Trainable Segmentation.
The (resulting) grayscale image encodes the reliability of the segmen-
tation. Areas which can be addressed to a certain class with a high
confidence will appear bright, whereas areas which have a lower con-
fidence to belong to a certain class will appear dark. The confidence is
represented by a percentage value, where 0 means "Not confident at
all" (dark) and 100 "Very confident" (bright).
Prediction When the model that was trained on example data is applied to a
new unlabeled data set the result is called a prediction.
Training The training user interface is accessed via the Intellesis Train-
able Segmentation tool on the Analysis tab. Within the training user inter-
face you can label the images to be used as input for training a specific
model, see User Interface - Training [} 489].
ZEN Intellesis Trainable Segmentation offers three main workflows. The general workflows
and the basic steps involved are shown inside the diagram.
The Training user interface is accessed via the Analysis tab. Open the Intellesis Trainable Seg-
mentation tool, select or create a new model and click on Start Image Training. The user in-
terface for training will be visible:
3 Image Gallery
On the right side you can import and select the images you want to use for training and
segmenting.
4 Labeling Options
Below the center screen area you can adjust the Labeling Mode or Brush Size.
Info
When you use images with large X/Y dimensions, e.g. large tile images, the segmentation will
only be performed on a subset of the whole image in order to avoid long waiting periods. The
current maximum size of the image subset in X/Y is 5000 pixels, centered on the current view-
port. Nevertheless, all labels inside the complete image will be used for training, but the seg-
mentation preview (pseudo-segmentation) will only be applied to that subset.
Selects the set of feature extractors used for segmentation. For more information, see Feature Ex-
tractors [} 506].
Parameter Description
Basic Features 25 A predefined feature set using 25 features. For more detailed informa-
tion, see Basic Features 25 [} 506].
Basic Features 33 A predefined feature set using 33 features. For more detailed informa-
tion, see Basic Features 33 [} 507].
Deep Features 50
Deep Features 64
Deep Features 70
Deep Features 128
Deep Features 256 The complete or reduced feature set from either the 1st, 2nd or 3rd layer of a pre-trained network is used to extract the respective number of features. For more detailed information, see Intellesis Deep Features [} 508].
§ There is no "right" selection. We recommend always trying different parameters for the same
image to see which one works best.
Parameter Description
No Postprocessing This parameter is set by default. No further postprocessing will be ap-
plied on the images.
Conditional Random Field (CRF) If selected, this post-processing function is applied to the output of
the pixel classification. This can improve the segmentation results, de-
pending on your sample. The CRF algorithm tries to create smoother
and sharper borders between objects by re-classifying pixels based on
confidence levels in their neighborhood.
Note: If CRF is activated, the returned confidence map no longer re-
flects the outcome of the majority votes of all decision trees of a spe-
cific class. Therefore, a map containing only ones is returned when
the CRF postprocessing option is activated.
Parameter Description
Undo/Redo When you click on the arrows you can undo/redo the last actions you
have performed.
Labeling Mode Here you can select between labeling and erase mode.
Brush Size Here you can set the brush size of the labeling/erasing tool.
Note that the brush size can alternatively be changed by pressing the
Ctrl key and using the mouse wheel (when the cursor is inside the im-
age area).
All Labels When you click on Clear, all labels in the active image will be deleted.
In the right tool area under Images & Documents you find the area for handling the images to
be used for training. Here you can load and select the images you want to use for training. When
you click on a loaded image, the image will be visible in the Center Screen Area.
Parameter Description
Training Mode Here you can select the mode for training.
– Single Channel Only one channel of the image is imported for training.
Such models trained on one channel can only be used inside the Im-
age Analysis Wizard for the Intellesis Class Segmenter.
Select Channel Only visible if Single Channel is chosen as the training mode.
In the dropdown list you can choose the channel you want to import
for training.
Info
For a good training result always note the following:
§ The more accurately you perform the labeling, the better the result will be. You can start
with a coarse labeling and then check the result for problematic areas where you should
refine the labeling.
§ Accurate labeling is generally preferred over "just labeling everything" roughly.
§ Take care to also label some areas which contain edges of objects and transitions between
two classes.
§ Really use an iterative approach: check the segmentation/training results before labeling
huge amounts of pixels.
1. On the Analysis tab, in the Intellesis Trainable Segmentation tool, click and se-
lect New.
2. Enter a name and a description for the new model and click
3. Click Start Training.
à The user interface for training opens.
4. In the Right Tool Area under Open Images click Import Images.
5. Select the image for training from the file system and click Open.
à The image is displayed in the list. Note that all imported images will be included in your
training model.
6. Select the image from the list.
à The image is displayed in the Center Screen Area. Note that on a later stage you can
add more images via Import Images to refine the training.
7. Go to the Left Tool Area and define the classes. Depending on your image and what you
want to segment, you can define a certain amount of classes. When you start with a new
model you always have two predefined classes Object and Background. If you click on
Add Class a new class is added. You can rename these classes by double-clicking and enter-
ing a new name. Note that you must not use the name Root for one of your classes as this
is a reserved keyword from the image analysis.
8. Move the cursor inside the image and start labeling the areas which you want to assign to
the selected class. To label within the image, simply hold down the left mouse button and
move the mouse.
9. After labeling a few areas with different classes, click Train & Segment.
à The software starts the training. The system tries to automatically recognize other areas
of the same classes. Depending on the image, the pixel classification can take a while.
When finished, the image has the additional channel Segmentation containing the seg-
mentation preview.
10. If you are not satisfied with the result, you can label more details of the corresponding
classes. For this you can zoom into the image or change the brush size of the cursor. The
more accurately you label the different classes within the image, the better the recognition
will be. When you have finished the labeling, you have to click on Train & Segment again. You
can repeat that process until you are satisfied with the segmentation result.
Note that at this point you will only see a pseudo-segmented image, and only the area
visible in the main window is segmented (max. area 5000x5000 px). The full segmen-
tation of an image/data set is performed on the Processing tab by using the trained model
within the Trainable Segmentation processing function.
Fig. 27: Trained model with refined labeling and pseudo segmentation results
2. In the file browser, select the model file or your network from the file system. The network
can also be imported by selecting the respective JSON file.
3. Click Open.
à The model will now be available in the dropdown list.
4. Select the model and click Start Training to work with the model, e.g. if you want to train
more details.
If you want to use the model for image processing switch to the Processing tab. In the
Trainable Segmentation processing function you can select the imported model and ap-
ply it to the desired images/data sets.
See also
2 Using a Trained Model for Image Processing [} 501]
2 Using a Trained Model for Image Analysis [} 502]
Prerequisite ü You have created and selected a model for advanced image segmentation.
1. On the Analysis tab, in the Intellesis Trainable Segmentation tool, click and se-
lect Export.
If you want to export the full model containing all images select Export with Images (see
also Exporting with Images [} 498]).
2. Select the file location and click Save.
The model with the trained segmenting routine is exported. Note that only the model file itself is
exported. Such a model is meant to be used for segmentation purposes or to create an image
analysis setting, but not for the training process.
1. On the Analysis tab, in the Intellesis Trainable Segmentation tool, click and se-
lect Export With Images.
à The file explorer opens.
2. Navigate to the folder where you want to store the training model and click Save.
The model with the trained segmenting routine and all images used for training are exported.
1. On the Analysis tab, in the Intellesis Trainable Segmentation tool, click on Options
and select Create Analysis Setting.
à The file browser opens with the default location for image analysis settings.
2. If you want to change the location, select a new path for saving the setting.
3. Click on Save.
You have now created and saved an image analysis setting.
For more information, see also Using a Trained Model for Image Analysis [} 502].
1. On the Analysis tab, in the Intellesis Trainable Segmentation tool, click Options
and select Delete.
à The Deleting Model dialog opens.
2. Click on Yes to confirm that you want to delete the model.
You have deleted the model. The model you worked with before is selected.
2. To change the color of a class, select the class and click on the colored rectangle next to
Color.
à You see the Color Selection dialog.
3. Select a new color from the list.
4. To change the opacity of the labels within the image, adjust the Opacity slider.
5. To rename a class, double-click on the class entry and enter a new name. Press Enter or
click the Save icon to save the new name. Note that you must not use the name Root for
one of your classes as this is a reserved keyword from the image analysis.
6. To delete the selected class, click Delete Class.
With this class-specific function you can import binary images from an external source as labels
for the currently selected class. This is helpful when the "ground truth" for a specific image is avail-
able or when you want to use an image obtained by a different modality.
Info
Be aware that this function overwrites existing labels for this class and that this functionality
can possibly create a huge number of labels that might lead to memory issues depending on
the system configuration and the selected feature extractor.
Prerequisite ü The label image to be imported has exactly the same XY dimensions as the currently se-
lected training image.
ü You have opened the Intellesis Trainable Segmentation Wizard. For more information, see
Creating a New Model [} 493].
1. Right-click a class and select Import Labels from Binary Mask.
à The Explorer opens.
2. Navigate to the label image you want to import, and click Open.
The imported image is displayed in the Image view. The displayed labels have the color of the se-
lected class and fit exactly with the class of the loaded image.
With this function you can convert the result of a segmentation directly to labels and thereby in-
crease the number of labels for the next training step.
Prerequisite ü You have opened the Training user interface. For more information, see Creating a New
Model [} 493].
ü You have performed a segmentation.
1. Right-click a class and select Segmentation to Labels.
The segmentations are converted to labels.
In ZEN you can use pre-trained neural networks as models for image segmentation. You can use
networks provided by Zeiss or load your own networks. Each network has to be imported to ZEN
via the Intellesis Trainable Segmentation tool as a model. For more information, see Importing
Models [} 497].
After the import the network can be used as a normal segmentation model for the following
workflows:
§ Segment single channel images using the respective image processing function, see also Us-
ing a Trained Model for Image Processing [} 501].
§ Create an image analysis setting based on the network (no hierarchy), see also Using a
Trained Model for Image Analysis [} 502].
§ Segment a specific class in the Automatic Segmentation [} 944] step of the Image Analysis
Wizard.
Prerequisite ü You have a trained model available for automatic image segmentation.
ü You have opened the image which you want to segment under Images & Documents.
1. On the Processing tab under Method open the group Intellesis Trainable Segmenta-
tion and select the Segmentation entry.
2. Open the Method Parameters and select the trained model from the Model list. Note
that the model must be trained on images with similar features otherwise the segmentation
will not work properly.
3. Select the desired Output Format.
If you select Multi-Channel, the result will be a multi-channel image, where every class
that was defined in the trained model is placed in its own channel. This output format can
be easily viewed inside the ZEN 3D view and combined with the original image data.
If you select Labels, you will get an image with one channel, where the pixels belonging to
the different classes will be labeled with different colors and will be represented by distinct
pixel values.
4. Under Input Parameters select the image which you want to segment. Note that it must
already be opened in the ZEN software, otherwise it will not be available in the list.
5. Click on Apply.
à The automatic image segmentation using the trained model is performed.
à After a short while you will get two resulting images, depending on the output format:
- the multi-channel or labels image and
- the confidence map.
Fig. 28: The image shows (from left to right): original image, segmented image, confidence
map
See also
2 Segmentation [} 508]
Once you have trained a model for segmentation, you can also use it in the Image Analysis wizard
of the software for further analysis. To use the trained model, you must first create a new
image analysis (IA) setting (*.CZIAS format).
5. You can now continue with setting up an image analysis. For more information about the
Image Analysis Wizard, see Image Analysis Wizard [} 941].
à Sandstone Dataset segmented using Intellesis inside the Image Analysis Wizard show-
ing the actual segmentation step. Instead of conventional thresholds, the classifier will be
used to identify pixels.
à It is possible to allow only pixels above a certain classification confidence (valid for all
classes) using the Min. Confidence (%) parameter.
à The binary functions Fill Holes and Separate are only applied to the resulting binary
masks from the classification and are therefore independent of the actual classification
process.
à Sandstone Dataset segmented using Intellesis inside the Image Analysis Wizard show-
ing the measurement results for one particular class (shown in green).
à You can also use the IP function inside the ZEN Blue Batch Tool, similar to all the other
functions, to segment several images using different models at once in one run.
Info
Undo Border Size Changes
There is no way to undo the change of the border size unless you remember the original value
and change it back with the same workflow described here.
Prerequisite ü You have imported a neural network, see Importing Models [} 497].
1. On the Analysis tab, in the Intellesis Trainable Segmentation tool, select the network
as your Model.
2. Click and select Change Border Size.
à The Change Border Size dialog opens.
3. Change the Border Size to fit your needs. Note that while increasing the border size re-
duces segmentation artifacts in the output, it also decreases the tiling speed.
4. Click OK.
You have changed the border size for tiling. If there are still tiling artifacts with the maximum bor-
der size, consider retraining the model with larger tiles.
See also
2 Tile Border Size Example [} 510]
The Intellesis Trainable Segmentation module allows you to use the Trainable Segmentation
processing function within the ZEN Open Application Development (OAD) environment.
Method/Command Description
Zen.Processing.Segmentation.TrainableSegmentation (Input, Model, Output Format)
Function to segment an image using a trained model. The output result is an image.
§ Model ModelName - Defines the name of the model.
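The call could be used in a macro roughly as follows (a pseudocode sketch: only the TrainableSegmentation name and its three parameters come from the table above; the surrounding document-handling calls and the placeholder values are assumptions and may differ in your ZEN version):

```
# Sketch of an OAD macro built around the documented call above. It only runs
# inside the ZEN macro environment, where the 'Zen' object is predefined.
# 'MyTrainedModel' and 'outputFormat' are placeholders, not verified API values.
image = Zen.Application.Documents.ActiveDocument    # input image to segment
result = Zen.Processing.Segmentation.TrainableSegmentation(
    image, "MyTrainedModel", outputFormat)          # Input, Model, Output Format
Zen.Application.Documents.Add(result)               # display the segmented image
```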
§ Segmentation performance in general depends among other factors on the system perfor-
mance, the available and free RAM and GPU memory.
§ Whenever using ZEN Intellesis Trainable Segmentation it is strongly recommended not
to use other memory- or GPU-intensive applications at the same time.
§ Deep Feature Extraction uses the GPU (NVIDIA only) if present on the system. It is recom-
mended to use a GPU with at least 8 GB of RAM.
§ When installing the GPU libraries it is required to use the latest drivers which can be obtained
from the NVIDIA homepage (https://www.nvidia.com/Download/index.aspx?lang=en-us).
§ In case of using an approved ZEISS workstation, the latest drivers can be found in the in-
staller.
§ When using Deep Feature Extractor on a GPU system, Tensorflow will occupy only as much
GPU RAM as needed to ensure system stability. When the segmentation is finished, this GPU
memory is released automatically (with the current version).
§ Therefore, when starting another GPU-intensive application, for example GPU-DCV, the GPU
memory cannot be used by this new process and a CPU fallback will be used or performance
issues may occur.
§ In this case, restart ZEN to free all possible GPU memory and then start using GPU-DCV (or
similar applications).
§ For calculating the features, various filters with various filter sizes and parameters are applied
to the region around each pixel (2D kernels).
§ The results are concatenated and yield the final feature vector describing the pixel.
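Conceptually, the per-pixel feature vector can be sketched with simple mean filters of different sizes (an illustration only; the actual extractors use different filters and parameters):

```python
import numpy as np

def box_filter(img, size):
    """Simple mean filter via padded sliding sums (illustration only)."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (size * size)

def pixel_features(img, sizes=(1, 3, 5)):
    """Stack one filter response per kernel size -> feature vector per pixel."""
    responses = [box_filter(img.astype(float), s) for s in sizes]
    return np.stack(responses, axis=-1)  # shape (H, W, len(sizes))

img = np.arange(16, dtype=float).reshape(4, 4)
feats = pixel_features(img)  # each pixel is described by a 3-element feature vector
```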
Used Filters:
Parameter Description
Threshold slider Adjusts the minimum confidence level in %.
6.3.17.2 Segmentation
Using the Segmentation image processing function you can apply a trained segmentation model
to an image/data set.
Parameter Description
Model Select the trained model here.
Output Format When applying the Segmentation processing function to an image
you always get two output images: the processed image and the
confidence map.
The following output formats for the processed image are available.
See also
2 Using a Trained Model for Image Processing [} 501]
6.3.17.3 Utilities
Here you can convert an output image generated with the Segmentation IP function according
to your needs:
Parameter Description
Labels to Channels Converts the resulting image with the output format "Labels" to a
multi-channel image.
Channels to Labels Converts the resulting image with the output format "Multi-Chan-
nel" to an image containing a single channel image with labels.
Under Parameters you can additionally adjust the Unlabeled Pixel
Value and the Output Pixel Type (8 Bit B/W or 16 Bit B/W).
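Both conversions can be sketched in NumPy (a conceptual illustration; class ids starting at 1 with 0 as the unlabeled pixel value are assumptions for this sketch):

```python
import numpy as np

def labels_to_channels(labels, num_classes):
    """One binary channel per class (class ids 1..N, 0 = unlabeled),
    conceptually mimicking the 'Labels to Channels' utility."""
    return np.stack([(labels == c).astype(np.uint8) for c in range(1, num_classes + 1)])

def channels_to_labels(channels, unlabeled_value=0):
    """Inverse conversion: pixels set in channel i receive label i+1,
    everything else the unlabeled pixel value."""
    labels = np.full(channels.shape[1:], unlabeled_value, dtype=np.uint8)
    for i, ch in enumerate(channels):
        labels[ch > 0] = i + 1
    return labels

labels = np.array([[0, 1], [2, 1]], dtype=np.uint8)
channels = labels_to_channels(labels, num_classes=2)
restored = channels_to_labels(channels)  # round-trips back to the original label image
```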
Parameter Description
Model Shows the selected model. If you have several models available, you
can select the corresponding model from the drop-down list.
Options
– New Creates a new, empty model, see Creating a New Model [} 493].
– Create Analysis Setting Creates and stores a *.czias file in the specific folder for image analysis
settings, see Creating Analysis Setting [} 498]. The file can then be
used in the Analysis Wizard.
– Import Imports a model to the ZEN software, see Importing Models [} 497].
– Export Exports the model to the file system, see Exporting a Model [} 497].
– Export With Images Exports the model including all images to the file system, see Export-
ing with Images [} 498].
Start Training Opens the Training UI, see User Interface - Training [} 489]
Parameter Description
Total Tile Width Displays the total tile width used by the network.
Total Tile Height Displays the total tile height used by the network.
Border Size Sets the border size of the tiles. The lower limit of the border size is
zero and the upper limit is a quarter of the smallest dimension of the
tile.
Tile Overlap Displays the tile overlap, which is the sum of the overlap on the
left and right side, see Tile Border Size Example [} 510]. It is updated
according to changes of the border size.
See also
2 Changing the Tile Border Size for Neural Networks [} 504]
The tile overlap in % is the sum of the overlap on the left and right side. Consider the fol-
lowing two examples as an illustration of the overlap:
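A worked example of the relationship between border size and overlap (the function name is made up; it encodes the limits described for the Border Size parameter above):

```python
def tile_overlap_percent(total_tile_width, border_size):
    """Tile overlap in % of the tile width: the border is added on both the
    left and the right side, so the total overlap is twice the border size."""
    max_border = total_tile_width // 4  # upper limit: a quarter of the (smallest) tile dimension
    if not 0 <= border_size <= max_border:
        raise ValueError("border size must be between 0 and a quarter of the tile size")
    return 100.0 * (2 * border_size) / total_tile_width

overlap = tile_overlap_percent(1024, 128)  # 2 * 128 / 1024 -> 25 % overlap
```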
This module allows you to train and use deep learning based models for the denoising of images.
This method can be applied to any type of image with any dimensions and is not dedicated to a
special field of application. It should be used before applying processing functions that modify the
pixel values of the image.
Denoising is an operation to reduce noise in an image, in case of ZEN Intellesis Denoising with the
help of deep learning methods. In general, there are different ways to train a denoising model. In-
tellesis Denoising uses the approach called Noise2Void (N2V), which requires only a noisy input
image for the training of a model and can thus be trained directly on the data that should be de-
noised. To give a simplistic explanation, this N2V method replaces pixels by masked pixel data
randomly selected within a certain window/surrounding area. With this approach, the model is
then trained to reconstruct the original pixels and to discard the implicit noise in the image. For
detailed information on Noise2Void, see the paper "Noise2Void - Learning Denoising from Single
Noisy Images" by Alexander Krull, Tim-Oliver Buchholz and Florian Jug, see also
https://arxiv.org/abs/1811.10980.
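The masking idea can be sketched in a few lines of NumPy (a strongly simplified illustration of the principle; the real N2V training pipeline differs):

```python
import numpy as np

rng = np.random.default_rng(0)

def n2v_mask(image, masking_ratio=0.01, window=5):
    """Simplified Noise2Void-style masking: a random fraction of pixels is
    replaced by a randomly chosen pixel from its local window. The network is
    then trained to predict the original values at the masked positions."""
    masked = image.copy()
    h, w = image.shape
    n_mask = max(1, int(masking_ratio * image.size))
    ys = rng.integers(0, h, n_mask)
    xs = rng.integers(0, w, n_mask)
    half = window // 2
    for y, x in zip(ys, xs):
        ny = np.clip(y + rng.integers(-half, half + 1), 0, h - 1)
        nx = np.clip(x + rng.integers(-half, half + 1), 0, w - 1)
        masked[y, x] = image[ny, nx]
    return masked, list(zip(ys.tolist(), xs.tolist()))

noisy = rng.normal(size=(64, 64))            # stands in for a noisy input image
masked, positions = n2v_mask(noisy, masking_ratio=0.02)
```

Because the replacement values are drawn from the surroundings, the network cannot simply copy the input pixel and is forced to learn the underlying structure instead of the noise.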
1 Denoising Parameters
Here you have the parameters to configure your denoising training, see Denoising Pa-
rameter Section [} 512].
2 Input Image
Displays the image that is selected in the Image Gallery.
3 Prediction Image
Displays the prediction for the image with the current settings made in the Denoising
Parameters section.
4 Image Gallery
Here you can import and select the images you want to use for training the denoising. A
click on an image opens it in the left image container in the image view.
5 View Options
Here you have some general view options on the Dimensions and Display tab to adapt
the image display.
Info
Advanced Parameters
The parameters Batch Size, Window Size and Masking Ratio are three advanced parame-
ters to adjust the NOISE2VOID (N2V) based training. For general information on denoising and
N2V, see Intellesis Denoising [} 511].
Parameter Description
Model Displays the name of the current model.
Number of Epochs Defines the number of times that the model is trained with the im-
ages.
The "number of epochs" refers to the number of times the training
dataset is passed through the neural network during the training
process. Each pass through the dataset is called an epoch (an epoch
consists of a fixed number of samples drawn from the dataset). In
N2V, the number of epochs is a user-defined parameter that deter-
mines how many times the neural network will be trained on the full
dataset. If the number of epochs is too low, the model may not be
able to learn the underlying patterns in the data, resulting in underfit-
ting. On the other hand, if the number of epochs is too high, the
model may start to memorize the training data, resulting in overfit-
ting, which makes it less effective at denoising new images.
Parameter Description
work to learn a more robust representation of the underlying image
structure, rather than simply memorizing the specific noise patterns in
the input image. The masking ratio controls the level of noise injec-
tion during training, with higher values leading to more aggressive
masking and generally better performance on noisy datasets. If the
masking ratio is too small, the model may not be able to learn the
noise distribution effectively, which can lead to poor denoising perfor-
mance. In general, the optimal values for these parameters will de-
pend on the specific characteristics of the input image and the
amount and type of noise present in the data. It is often necessary to
experiment with different values to determine the best combination
for a given task.
Train and Denoise Trains the model based on the current parameter settings.
Finish Saves the model and all changes and closes the wizard.
See also
2 Intellesis Denoising [} 511]
Prerequisite ü If required, you have pre-processed your image(s) with the Whitening function, see Whiten-
ing [} 181].
1. On the Analysis tab, in the Intellesis Denoising tool, click and select New.
2. Enter a name for the new model and click . Alternatively, enter the name and press
Enter.
à A new setting with the name is created and automatically selected as Model.
3. Click Start Training.
à The user interface for training opens.
4. In the Images and Documents section on the right, click Import Images.
à A file browser opens.
5. Select the image(s) you want to use for classification and click Open.
à For images with multiple channels, a dialog opens to select the channel for denoising.
6. Select the channel you want to use for denoising and click OK.
à The image is displayed in the list in the Images and Documents section. Note that all
imported images are included in your model.
7. Select the image from the list.
8. In the parameter section, set the Number of Epochs and adapt the more advanced pa-
rameters, if necessary.
9. Click Train and Denoise.
à Your model is trained based on the settings and the prediction is displayed in the right
image container.
10. If you are satisfied with the result, click Finish.
à All changes are saved and the wizard is closed.
You have successfully created and trained a model for denoising. You can now use it to denoise
your images with the Intellesis Denoising image processing function, see Using a Trained
Model for Denoising [} 516], or use it during a continuous acquisition, see Using Denoising Dur-
ing Continuous Acquisition [} 62]..
Prerequisite ü You have a trained model available which you want to import.
ü You are on the Analysis tab.
The model is exported. It contains the trained routine to denoise an image and is not intended for
the training process anymore.
Parameter Description
Model Displays the selected model. If you have several models available, you
can select the corresponding model from the dropdown list.
Options
– Export Exports the selected model to the file system, see Exporting an In-
tellesis Denoising Model [} 515].
See also
2 Intellesis Denoising [} 511]
2 Creating and Training an Intellesis Denoising Model [} 514]
6.4.7 IP Function
Info
Use in Direct Processing
If you are using this function in Direct Processing, the denoising model has to be placed/avail-
able on the processing workstation. Additionally, note that automatically selecting a remote
processing PC is not possible when Intellesis Denoising is used in Direct Processing.
Parameter Description
GPU Only available if you have several CUDA capable GPUs. If you have
only one, the selection is fixed.
Selects the GPU used for running the denoising model. Running the
model on several GPUs in parallel is currently not possible.
See also
2 Using Direct Processing [} 230]
2 Intellesis Denoising [} 511]
2 Using a Trained Model for Denoising [} 516]
This module offers the functionality to classify objects based on measured parameters of an ana-
lyzed image using machine-learning algorithms and to create and to train such a model for object
classification. Since the input for the object classification is an analyzed image containing a result
table, the functionality of the Image Analysis module is also required for the complete workflow.
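To illustrate the idea of classifying objects from measured parameters, here is a toy nearest-centroid classifier (not the algorithm the module actually uses; the features, labels and confidence formula are made up for this sketch):

```python
import numpy as np

def train_centroids(features, labels):
    """Toy stand-in for the trained classifier: one centroid per class,
    computed from the measured features of the labeled objects."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(features, centroids):
    """Assign each object to the class with the nearest centroid and report
    a naive confidence derived from the distances."""
    classes = list(centroids)
    dists = np.stack([np.linalg.norm(features - centroids[c], axis=1) for c in classes])
    best = dists.argmin(axis=0)
    conf = 100.0 * (1.0 - dists.min(axis=0) / dists.sum(axis=0))
    return [classes[i] for i in best], conf

# Hypothetical measured parameters (e.g. area, mean intensity) per labeled object
feats = np.array([[10.0, 200.0], [12.0, 190.0], [50.0, 40.0], [55.0, 35.0]])
labels = np.array([0, 0, 1, 1])
centroids = train_centroids(feats, labels)
pred, conf = classify(np.array([[11.0, 195.0], [52.0, 38.0]]), centroids)
```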
For the object classification functionality in ZEN there are three main workflows:
1 Classification Settings
Here you can set the classes for labeling/ classifying the objects. For more information,
see Classification Settings [} 519].
2 Image View
Here you have your analyzed image and can label the objects by clicking on them.
3 Table
This table displays your analysis results and can be used for labeling the objects in your
image. For additional information, see Object Classification Table [} 520].
4 Image Gallery
Here you can import and select the images you want to use for classification. For more
information, see Image Gallery (Object Classification) [} 519].
5 View Options
Here you have some general view options of the Dimensions Tab [} 1029] and Display
Tab [} 1043] as well as specific labeling options on the Labeling Options Tab [} 520].
See also
2 Creating and Training an Object Classification Model [} 521]
Parameter Description
Model Displays the model name.
Classes list Displays all the classes for classifying the objects. The color of the
class can be changed by clicking on the color field. A class can be re-
named via the right-click menu.
Delete Class Deletes the currently selected class from the list.
Train & Classify Starts the training for the object classification setting.
Display
See also
2 Creating and Training an Object Classification Model [} 521]
On the right under Images & Documents, you find the area for handling the images. Here you
can load and select the images you want to use for classification. When you click on a loaded im-
age, the image is displayed in the Center Screen Area.
The list of images offers additional information about each image.
If you load a new image, only the preview image, file name and type of image are displayed.
When you have labeled certain slices (for example), they are listed under the image entry. If you
click on this information the corresponding image (slice) is automatically displayed in the center
screen area. This is very helpful when you are working with large data sets such as z-stacks,
scenes or time-series and you want to quickly load the image which you have already labeled.
Parameter Description
Import Images Opens the file browser to select the images for import.
Parameter Description
Mode
– Selection Activates the selection mode. You can select objects in the image
without labeling them.
– Labeling Activates the labeling mode. You can label the objects in your image.
– Erase Activates the erasing mode. You can erase the labels in your image.
Object Display Sets how the labels are displayed in the image.
Here the analyzed objects of the image are displayed. You can also use the table to label the ob-
jects, see Creating and Training an Object Classification Model [} 521]. This table is linked to
the image, which means selecting a row in the table centers the view on the respective object in
the image and vice versa. If the image is not zoomed to extent, a click on an object in the table
centers this object in the image view and adapts the zoom level if necessary (by zooming out). The
table can be sorted by each column by simply clicking on the header.
The table only displays the measurement features selected inside the image analysis, not the
ones that are used internally to classify the objects.
Parameter Description
ID Displays the unique ID of the objects in the analyzed image.
Prediction Displays the predicted label using all labels from all available images.
Confidence The confidence value in % for the predicted label of every individual
object.
Measurement features In addition to ID and Label, the values of the various Measurement
Features [} 440] are displayed in the table.
Prerequisite ü You have an analyzed image. For detailed information on analysis, see the chapter Image
Analysis [} 402].
1. On the Analysis tab, in the Trainable Object Classification tool, click and select
New.
2. Enter a name for the new model and click . Alternatively, enter the name and press
Enter.
à A new setting with the name is created and automatically selected as Model.
3. Click Start Training.
à The User Interface [} 518] opens.
4. In the Image Gallery on the right, click Import Images.
à A file browser opens.
5. Select the image(s) you want to use for classification and click Open.
à The Select Region Class and Channel dialog opens.
6. Select the Region Class and the channels you want to use for classification and click OK.
à The image(s) is/are imported into the wizard.
7. In the Open Images tool on the right, click on the image you want to label.
à The image is displayed and the table shows the data of the analysis result.
8. On the left side, add the classes you need for your object classification by clicking Add
Class.
à You have created the classes that you want to distinguish.
9. To change the label color for a class, click on the color field of the list entry and select one
from the window.
10. To change the name of a class, right-click on the entry, select Rename, enter a new name
and click . Alternatively, double click the name entry, enter a new one and click
, or press Enter.
11. In the Labeling Options tab, click .
à You are now in labeling mode.
12. In the classes list, select a class and click on an object that belongs to this class in the image
or in the table.
13. Repeat the labeling for the objects of the different classes you created.
14. Click Train & Classify.
à Your model is trained based on the labeling and a prediction is displayed.
15. If you are not satisfied with the result, you can label more objects.
16. If you are satisfied with the result, click Finish.
à All changes are saved, and the wizard is closed.
You have successfully created and trained a model for object classification. You can now use it to
classify objects in your analyzed images with the processing function Object Classification
[} 525]. For more information, see also Using a Trained Model for Object Classification [} 523].
Prerequisite ü You have a trained model available which you want to import.
ü You are on the Analysis tab.
Prerequisite ü You have created and selected a model for object classification.
1. On the Analysis tab, in the Trainable Object Classification tool, select the model you
want to export in the dropdown.
2. Click and select Export.
à A file browser opens.
3. Select the file location and click Save.
The model with the trained classification routine is exported. Such a model is meant to be used
for classifying objects in an analyzed image, but not for the training process.
NOTICE
Necessary retraining of models
In case of changes in Python libraries, trained models can stop working on a new version of
ZEN and need to be retrained first. Retraining is only possible if the model contains the images,
or the images are generally still available. To be able to retrain your model, consider the fol-
lowing solutions:
4 Export your model with your images (see Exporting an Object Classification Model with
Images [} 523]).
4 Make a backup of the images used for training the model, e.g. on your (external) hard
drive.
1. On the Analysis tab, in the Trainable Object Classification tool, click and select
Export With Images.
à The file explorer opens.
2. Navigate to the folder where you want to store the training model and click Save.
You have exported your model with the trained classification routine as well as the images that
were used for training. Such a model is meant to be used for classifying objects in an analyzed im-
age, but not for the training process anymore.
Prerequisite ü You have created and trained an object classification model. For more information, see Creat-
ing and Training an Object Classification Model [} 521].
ü You have opened the analyzed image(s) for which you want to use the classification model.
1. On the Processing tab, under Trainable Object Classification, select the function Object
Classification.
2. Make sure that in the Input tool the correct image is selected.
3. In the Parameter tool, select the Model you want to use for object classification.
4. Click Apply.
The objects in your image are now classified based on the trained model. To view the result (ta-
ble) you can switch to the Analysis View [} 429]. There you can see all measurement values in the
table and adapt the chart to display your object classification results with the Custom Chart tab
[} 432] as well as select your objects in the Analysis Tab [} 430].
Parameter Description
Model Displays the selected model. If you have several models available, you
can select the corresponding model from the dropdown list.
Options
– New Creates a new, empty model, see Creating and Training an Object
Classification Model [} 521]
– Import Imports a model to the ZEN software, see Importing an Object Clas-
sification Model [} 522].
– Export Exports the model to the file system, see Exporting an Object Classi-
fication Model [} 523].
– Export With Images Exports the model including all images to the file system, see Export-
ing an Object Classification Model with Images [} 523].
Description Displays and sets a description for the selected model.
6.5.9 IP Function
With this image processing function you can classify the objects of your analyzed image based on
a trained model.
Parameter Description
Model Selects the object classification model.
Append Classification Features to Result Table
Activated: Displays all statistical features in the result table.
Deactivated: Displays only the features that were selected in the image analysis.
See also
2 Intellesis Object Classification [} 517]
2 Creating and Training an Object Classification Model [} 521]
2 Using a Trained Model for Object Classification [} 523]
This module offers functionality to analyze and quantify 3D image data. The image analysis wiz-
ard, optimized for 3D functionality, guides you through the segmentation steps to set up a fast
and easy-to-use quantification of segmented objects. The resulting objects are visualized using the
3D viewer and, with the help of a table, the objects can be sorted and exported. You need a valid
license that includes the 3Dxl and 3D Image Analysis functionality.
1. On the Analysis tab, in the Image Analysis tool, click and select New from the
dropdown.
2. In the Settings field, enter a name for your image analysis setting and click . Alterna-
tively, enter the name and press Enter.
à A new image analysis setting is created and saved.
3. For Method, click .
à The Segmentation Method Selection dialog opens.
4. From the Method drop down menu, select 3D Segmentation and click OK.
à A dialog asks whether you want to overwrite existing settings to create the 3D
setting.
5. Click Yes.
6. In the Image Analysis tool, click Edit Image Analysis Setting.
à The Image Analysis Wizard opens in the Classes step with a default class already cre-
ated.
7. If you want to extend the predefined list of classes, click Add Class to add a new class or
Add Subclass to add a subclass to the currently selected class.
8. If you want to change the name for the classes, select a class and change the text in the
Name field.
9. Select your class(es) and define the Channel you want to use for segmentation and define
a Color for the segmentation mask.
10. Click Next.
à The Frame step opens.
11. If you only want to analyze a specific volume of the image, you can use the parameters in
this step to create one or more measurement frames. For a description of the parameters,
see Frame [} 942].
12. Click Next.
à The Automatic Segmentation step opens.
13. Select one of your classes and click Select.
à The Class Segmentation Method dialog opens.
14. For Method, select the segmentation method you want to use for your current class and
click OK.
à The dialog closes and the displayed parameters are updated according to the selected
method. Parameters that work only on the 2D slices of the volume are grouped under
2D Parameters, whereas parameters that work on the 3D dataset are grouped under
3D Parameters. For a description of the parameters, see Automatic Segmentation
[} 944].
à Additionally, the section for 3D Preview is displayed.
15. Set the parameters for the segmentation of the currently selected class according to your
needs.
16. Repeat the previous three steps to select a segmentation method and set the parameters
for each of your classes.
17. To see your image in 3D, change to the 3D view in the center view of the wizard.
18. Use the parameters in the 3D Preview section to generate three dimensional slices around
the current z position.
19. Click Next.
à The Region Filter step opens.
20. If you want to use conditions to filter out unwanted segmented regions, you can use the
parameters of this step. For a description of the parameters, see Region Filter [} 954].
21. Click Next.
à The Features step opens.
22. Select one of your classes and, under Features of Individual Regions, click Select.
à The dialog for measurement feature selection opens.
23. In the list of features on the right, select a measurement feature you want to use and click
.
à The feature is added to the Selected Features list on the left.
24. Add all the measurement features you want to use for the regions of the current class and
click OK.
25. To define measurement features for all regions, under Features of All Regions, click Se-
lect.
à The feature selection dialog opens.
26. In the list of features on the right, select a measurement feature you want to use and click
.
Prerequisite ü You have created an analysis setting for a 3D analysis, see Creating a 3D Image Analysis Set-
ting [} 525].
ü You have opened the image you want to analyze.
1. On the Analysis tab, open the Image Analysis tool.
2. For Setting, select the setting you have created for your 3D analysis.
3. Click Start Analysis.
à Your image is analyzed based on your setting.
à The 3D view opens to display your 3D volume and the objects resulting from the image
analysis, with the Analysis Objects Table containing the data for the analyzed objects.
The objects are displayed in the color that was assigned to the respective class in the
analysis wizard.
4. Click File > Save. Alternatively, press Ctrl + S.
à The analyzed image is saved under its current name. The resulting objects are saved in
the czi file as well.
Prerequisite ü You have opened an image where a 3D analysis has been performed, see Performing and
Saving a 3D Analysis [} 527].
1. Open the 3D view for your image.
à The 3D view opens to display your 3D volume and the objects resulting from the image
analysis, with the Analysis Objects Table containing the data for the analyzed objects.
The objects are displayed in the color that was assigned to the respective class in the
analysis wizard.
2. If you do not see the result table, in the Analysis tab below the image, activate Show
Analysis Objects Table. To be able to see the objects, also make sure that is acti-
vated in the bottom tool bar of the viewer, see Tool Bar (Bottom) [} 532].
3. In the Analysis Objects Table on the right, in the Classes dropdown, select the object
class you want to examine.
à The table and view are updated accordingly.
4. If you want to highlight certain objects in the view, activate the checkbox for the respective
entry in the Analysis Objects Table.
5. Use the controls of the tool bars and 3D specific view options to interact with the image.
6. To export your result table, in the Analysis Objects Table, click Export Table.
à A file browser opens.
7. Navigate to the desired location and click Save.
à The currently displayed entries of the Analysis Objects Table are exported and saved as
a .csv file.
8. To export the image to arivis Pro, in the bottom tool bar click .
à The image is exported to arivis Pro and is displayed with almost identical rendering set-
tings.
This tab is only available if you have opened an image with objects from the 3D Image Analysis.
Parameter Description
Show Analysis Objects Table  Activated: Displays the analysis objects table on the right side of the 3D view.
This table is only available if you have opened an image with objects from the 3D Image Analysis,
or if you have run a 3D image analysis; in that case it is displayed by default. It can be switched off
by deactivating Show Analysis Objects Table in the Analysis tab. The table displays the resulting
objects of an image analysis and allows you to manipulate, filter and export the objects. The table
can be sorted by one column by clicking the table header. A little arrow in the header indicates
whether the sorting order is ascending or descending.
When you select an entry, the respective object is highlighted in the 3D view, and vice versa. You
can also select multiple entries when pressing the Ctrl key while clicking, or you can select a
range of entries by pressing the Shift key while clicking on the first and last entry of the range. All
corresponding objects are highlighted in the viewer. You can also undock the table by clicking the
undock icon in the top right and then move the table freely on your screen.
Parameter Description
Classes Selects the class for which the objects are displayed in the table.
Objects Table
– Features The various features you added in the Features step of the analysis
wizard are displayed in this table, like the Volume and Parent ID.
See also
2 Undocking/Docking Tool Windows [} 40]
2 Analysis Tab [} 528]
6.7 3Dxl
This module enables you to visualize 3D or 4D image data. It provides up to three clipping planes
and features five different rendering methods, including an improved transparency mode for better
visualization of dense structures, such as EM, XRM and dense fluorescent data. Time series
(4D) movies, the generation and export of movies, as well as tools for interactive 3D measurements
are also included. 3Dxl offers a bridge functionality and sample pipelines to send data to
arivis Pro with saved settings for fast and easy 3D analysis. For full functionality, the 3Dxl module
requires a dedicated, up-to-date graphics card with full OpenGL support (NVIDIA recommended, AMD
possible).
Prerequisite
ü The Rotation mode in the left tool bar is selected.
6.7.2 3D View
1 Tool Bars
With the toolbars on the left, right and bottom of the image area you can directly con-
trol and move the 3D volume, see Tool Bars [} 530].
2 3D View
The 3D view displays z-stack images three-dimensionally as a 3D volume and objects re-
sulting from a 3D image analysis (if you have the required license for 3D Image Analysis
and toggled their visibility). Selecting an analysis object in the view highlights the respec-
tive row in the table. You can also select multiple objects by pressing Ctrl while clicking
on the objects.
4 Summary Table
This table is only visible if you have opened an image with objects from the 3D Image
Analysis. This table displays a summary of information about the objects of the class se-
lected in the Analysis Object Table. The displayed features are the ones selected for all
regions in the Feature step of the analysis wizard.
5 View Options
In this area you have your 3D specific view options with parameters to adjust the appear-
ance and further settings of the 3D volume.
The tool bars are arranged to the left and right of the image area and underneath it. You can use
the tools to control and adjust the display of the 3D volumes in the image area.
Parameter Description
Bottom Thumb Wheel  Zooms in or out of the 3D image.
Select  Enables you to select end points of measurement tools that have been drawn into the 3D image (Measurement tab). You can then edit the position of the end points.
Rotate  Enables you to rotate the 3D image in any way you wish within the space. This is the default mode when you switch to 3D view for the first time.
Zoom  Enables you to increase or reduce the zoom factor of the image area.
Move
Fly  Enables the flight mode. This mode allows you to virtually fly through the 3D image. Use the keys from the list below to control your flight.
Key Function
W Forward
S Backward
A Left
D Right
Space Up
C Down
E Rotate (clockwise)
Q Rotate (counter-clockwise)
Parameter Description
Toggles the visibility of the X/Y clipping plane.
Play  Only active if a position list containing at least two saved positions exists. Plays back a preview of the series that is calculated. To stop the preview, click on the button again.
Spin Mode  Enables the spin mode. This allows you to set the 3D volume in continuous motion. For a short description of how to use the spin mode, see Animating the 3D Volume [} 529].
Right Thumb Wheel  Rotates the 3D volume around the (Z) axis perpendicular to the screen plane.
6.7.2.2 3D Tab
Here you can specify which projection/rendering mode you want to use to display the 3D volume.
There are 5 view modes available. To activate the desired view mode, click on the corresponding
button. An activated button (and thus the active mode) appears in blue.
Parameter Description
Transparency Activates the Transparency rendering mode.
Update on completed frame  Only available if Follow Acquisition on the Dimensions tab is activated.
Activated: During acquisition, the 3D View is only updated after the
acquisition of a stack has been completed.
Deactivated: While the acquisition of a stack is still ongoing, the 3D
View is continuously updated. However, with very fast acquisition this
update cannot be guaranteed.
Toggle Clipping Planes  By activating or deactivating the buttons you can show or hide the
corresponding clipping planes in the 3D volume.
If you right-click on an activated button, a shortcut menu opens. Here
you can select whether you want the back (Clip Back), front (Clip
Front) or both sides of the 3D volume to be clipped. You can also
specify the Style of the clipping plane. Under each button is a slider.
You can use this to move the relevant clipping plane within the volume.
Parameter Description
Wedge Activated: Activates two texture planes. Only the sector between the
planes is cut out. You can select which planes you want to be used
for the wedge function from the dropdown list. The selection is also
visible in the relevant buttons.
– New Creates a new settings file that is given a name automatically and has
the file extension *.cz3dr. The settings file can be found in the user
path under \My Documents\Carl Zeiss\ZEN\Documents\3Dxl render
settings.
– Delete Deletes the selected settings file from the hard drive.
– Rename Renames the selected settings file. Enter a new name in the input field
and confirm with OK.
Create Image Creates a new image from the current view. This image is a 24 bit
RGB color image. All graphic elements, such as annotations, are burnt
in. In the dropdown left of the button you can select the resolution
for the image that is created.
Here you can define the appearance of the 3D volume. On the tabs available here, select
the setting that you want to change (e.g. Transparency). Depending on which mode you have
activated on the 3D tab, different tabs and parameters are available.
Parameter Description
Channel selection  Here you can select the channel of a multichannel image for which you want to set the transparency.
Threshold Sets the lower threshold value in percent of the gray levels displayed.
With this setting you specify the gray value range for the relevant
channel that you want to be included in the rendered image.
Ramp Sets the extent of the transition from completely transparent to com-
pletely opaque (0-100 percent).
Parameter Description
Histogram Displays the settings that you enter using the sliders schematically.
The X axis represents the gray level values and the Y axis the opacity.
You can also change the position of the curve using the mouse.
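Conceptually, Threshold and Ramp define a piecewise-linear opacity transfer function over the gray values, which is what the histogram visualizes. The following is a minimal sketch of such a mapping, an illustration of the concept only and not ZEN's actual implementation:

```python
def opacity(gray_percent, threshold, ramp):
    """Map a gray value (0-100 %) to an opacity between 0 and 1.

    Below `threshold` the voxel is fully transparent; over the next
    `ramp` percent the opacity rises linearly to fully opaque.
    """
    if gray_percent <= threshold:
        return 0.0
    if ramp == 0 or gray_percent >= threshold + ramp:
        return 1.0
    return (gray_percent - threshold) / ramp

# A gray value halfway up the ramp is rendered half opaque.
print(opacity(50, threshold=40, ramp=20))
```

Dragging the curve in the histogram with the mouse corresponds to shifting `threshold` and stretching `ramp` in this sketch.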
Parameter Description
Channel selection  Here you can select the channel of a multichannel image for which you want to adjust the surface settings.
Threshold Sets the lower threshold value in percent of the gray levels displayed.
With this setting you specify the gray value range for the relevant
channel that you want to be included in the rendered image.
Specular Light  Sets the specular light from 0 to 100%. This value influences the
differences between bright and dark structures.
Parameter Description
Background Color Sets the background color for the 3D view. To do this, click on the
color field and select the desired color.
Parameter Description
Brightness Sets the brightness of the light source (from 0 - 100 %).
Parameter Description
Azimuth Here you can enter the angle of the light source above the virtual
horizon.
Elongation Here you can enter the light source's horizontal angle of incidence.
Light source As an alternative to the slider or input field, you can set the Azimuth
and Elongation together by using the mouse to move the point
within the light source display.
Enable Directional Activated: Enables full lighting for volumetric rendering. A directional
Light light illuminates all structures in a scene with parallel light rays from a
specific direction, similar to sun light. The light disregards the distance
between the light itself and the structures, so the light does not di-
minish with distance.
Enable Tone Map- Activated: Enables tone mapping during the rendering of the image
ping data. Tone mapping refers to the compression of the dynamic range
of high contrast images (HDR). The contrast range is reduced in order
to display digital HDR images on output devices with a more limited
dynamic range. In most cases, tone mapping increases brightness and
contrast of the rendering result and makes colors more vibrant.
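The manual does not state which tone-mapping operator ZEN uses. As an illustration of the general idea, the classic Reinhard operator compresses an unbounded HDR luminance range into [0, 1):

```python
def reinhard(luminance):
    """Global Reinhard tone mapping: L / (1 + L).

    Compresses arbitrarily large HDR luminance values into [0, 1)
    while keeping good differentiation in the darker range.
    """
    return luminance / (1.0 + luminance)

# Bright values are compressed much more strongly than dark ones.
for L in (0.1, 1.0, 10.0, 100.0):
    print(L, "->", round(reinhard(L), 3))
```

Note how this matches the description above: the dynamic range is compressed so that high-contrast data fit the output device's limited range.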
Parameter Description
View angle Sets the projection angle at which you want to view the scene freely
between 0° and 80°. The effect of this on the perspective display is as
if you are viewing the 3D image through a telephoto or wide-angle
lens.
Scale Z Here you can set the scaling of the volume in the Z direction (value
range 10% - 600%).
Stereo anaglyph Activated: Displays the 3D volume as anaglyphs. You can choose be-
tween a
§ Red/Green display, or a
§ Red/Cyan display.
Camera separation Sets the distance between the two virtual cameras (0-20%).
Parallax shift Sets the degree of movement that is necessary to bring the two cam-
era images back into line (-100 to +100%).
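A red/cyan anaglyph combines the two virtual camera images into a single frame: the red channel is taken from the left-eye image, green and blue from the right-eye image. The following sketch illustrates this principle only (it is not ZEN's renderer); pixels are represented as simple (r, g, b) tuples:

```python
def red_cyan_anaglyph(left, right):
    """Combine left/right RGB pixel lists into a red/cyan anaglyph.

    Red comes from the left-eye image; green and blue come from the
    right-eye image, so red/cyan glasses separate the two views.
    """
    return [(lr, gg, gb) for (lr, _, _), (_, gg, gb) in zip(left, right)]

left = [(200, 10, 10), (180, 0, 0)]
right = [(0, 150, 150), (0, 140, 140)]
print(red_cyan_anaglyph(left, right))
```

The Camera separation and Parallax shift parameters above control how different the two input images are, and thus the perceived depth.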
Info
On the Clipping tab you can edit the clipping planes. On the 3D tab you can activate or deac-
tivate the relevant clipping planes in the 3D volume.
Parameter Description
Show All Clipping Planes  Activated: Automatically inserts all 3 clipping planes into the 3D
volume. Additionally, the editing functions for each clipping plane
are activated automatically.
X/Y
X/Z
Y/Z
The following parameters are only visible if the Activate checkbox is activated and a clipping
plane has been selected.
Parameter Description
Clipping Plane Style  Change the display of the selected clipping plane using the dropdown
list to the right of the Activate checkbox. The following
settings are available:
- Colored The plane is displayed in color. The frame color is used with 50%
transparency here.
- Binary The data above the threshold value that are touched by the clip-
ping plane are displayed in binary form as a white area. Black
pixels are non-transparent.
- Transparent The data that are touched by the clipping plane are displayed as
they are in Transparent view mode, but in 2 dimensions. The
ramp for the transparency is linear here. Black pixels are trans-
parent.
Parameter Description
- Textured opaque The display appears as it does with the Textured setting. Black
pixels do not let any light through, however, meaning that the
render data behind them are not displayed.
Outline Activated: Displays the frame of the selected clipping plane. En-
ter the frame color via the color field.
Clip Surface Channels Only visible if Surface or Mixed view mode is activated.
Here you can enter which channel you want to be clipped using
the channel buttons.
Position Here you can enter the position of the selected clipping plane.
<X Here you can enter the X angle for the selected clipping plane.
(X Angle)
<Y Here you can enter the Y angle for the selected clipping plane.
(Y Angle)
Reset Orientation Resets the selected clipping plane to the original position.
Here you can create render series of individual views, which you can later view and export as a
movie. The tab contains different control elements depending on the Render Series type. The fol-
lowing parameters are the same for all render series types: Render Series section, Stored sec-
tion, Apply button and Fixed Resolution checkbox.
Parameter Description
Render Series Here you can select the desired series mode. Depending on the cho-
sen render series type, different parameters are displayed.
- Turn Around X  Define the start/stop angle and the rotation direction around the X axis.
- Turn Around Z  Define the start/stop angle and the rotation direction around the Z axis.
- Start/Stop Define the angle and zoom settings for the start and end position of
your series. The intermediate positions are interpolated evenly.
- Position List Define any number of positions. The positions can each have com-
pletely different rotation, zoom and illumination settings.
Parameter Description
- Over Time Only visible in the 2.5D view.
Define the start time point and end time point for a series. All other
settings (rotation, zoom, etc.) remain unchanged.
Apply Calculates the series. A new image document is opened in the Center
Screen Area. You can view the series by clicking on the Play button
in the Dimensions tab.
Options  Clicking on the button opens a shortcut menu with the following options:
- Save As Saves a copy of the currently selected settings file under a different
name.
Preview  Shows a preview of the series to be created. Use the Play/Stop button to start or stop the preview.
Frames  Sets the number of individual frames that the series consists of after
the calculation. The more individual frames you specify here, the
more fluidly the scene transitions will be displayed later. Select predefined
values from the dropdown list (e.g. 20 or 100 frames).
The following parameters are only available if you have selected Turn Around X/Y/Z under Ren-
der Series:
Info
4 The X rotation, Y rotation and Z rotation render series types all have the same control elements and
differ only in the axis around which the rotation is calculated.
4 The preview function is not available for these types of series.
Parameter Description
360° Panorama Select 360° panorama, if you want to generate a complete rotation
series.
Partial Panorama  If you select partial panorama, you can specify the starting angle and
stopping angle that you want to use for the series. To do this, enter
the desired values in the input fields or adjust them in the graphical
representation of the rotation circle to the right of the input fields.
Angle Definition  When you are configuring a partial panorama, the desired angles can
also be determined easily using the circular control element:
grab the white start/stop points with the mouse and position them
accordingly on the circle. The number of individual images is also displayed here.
The following parameters are only available if you have selected Start/Stop under Render Se-
ries:
Parameter Description
Start Position You can position the volume in the image area as required using the
mouse. The geometric parameters are displayed in the input fields.
You can also determine the Camera Position and the Look At pa-
rameters for X, Y or Z and the angle directly using the input fields or
the slider. All changes are displayed immediately in the image area.
Stop Position You can position the volume in the image area as required using the
mouse. The geometric parameters are displayed in the input fields.
You can also determine the Camera Position and the Look At pa-
rameters for X, Y or Z and the angle directly using the input fields or
the slider. All changes are displayed immediately in the image area.
The following parameters are only available if you have selected Position list under Render Se-
ries:
Parameter Description
Add Adds the current position to the position list.
Parameter Description
Position list Each position is displayed in the list with its X, Y, Z angle and zoom
level.
Using the control elements at the bottom of the list you can change
the order of the positions (Arrow buttons), cut positions (Scissors
icon) or copy and paste them again at another position (Copy/Paste
icons).
If you want to delete all positions, click Clear List.
Additional Param- You can determine which of the following parameters you want to be
eters taken into consideration when the series is calculated.
To do this, activate the corresponding checkbox:
- Time Includes time series parameters (only active for time series images).
- Camera Includes camera settings, e.g. viewing angle (from the 3D/virtual cam-
era).
- Surface Includes surface settings (only active in Surface and Mixed mode).
Parameter Description
Tool bar
- Select  Changes the mouse pointer to Selection mode. Use this to select measurements in the 3D volume in order to change them.
- Line  Use this to measure the length of a line in µm. Click once on the starting point and hold down the mouse button. Then drag the mouse to the end point and release the mouse button again. The measurement is complete. The result of the measurement is displayed in the list to the right of the image area.
- Angle  Use this to measure the angle between two connected legs. First define the starting point. Then use the mouse to drag the first leg to the desired first end point. Define the second leg by clicking on the second end point. The angle measurement ends with a display of the angle measured (in degrees). The result of the measurement is displayed in the list to the right of the image area.
- Polygon Curve  Use this to measure along a line with any number of segments. Click from corner point to corner point. Complete the measurement by right-clicking. The result of the measurement is displayed in the list to the right of the image area.
- Color selection  Here you can select a color for the tool you want to draw in. Simply click on the colored rectangle and choose a color from the list.
- Auto Color  Activated: Automatically changes the color of the drawn-in tool.
Parameter Description
Show Measurements  Activated: Shows the measurements in the 3D volume or in the list
of measured values at the right of the image area.
Display Values
- as list Activated: Displays the measured values in the measurement data ta-
ble.
Delete Selected Only active if a measurement tool has been selected in the 3D vol-
ume.
Deletes selected measurement tools from the 3D volume.
Parameter Description
Tool bar
- Select  Changes the mouse pointer to Selection mode. Use this to select measurements in the 3D volume in order to change them.
- Line  Use this to measure the length of a line in µm. Click once on the starting point and hold down the mouse button. Then drag the mouse to the end point and release the mouse button again. The measurement is complete. The result of the measurement is displayed in the list to the right of the image area.
- Angle  Use this to measure the angle between two connected legs. First define the starting point. Then use the mouse to drag the first leg to the desired first end point. Define the second leg by clicking on the second end point. The angle measurement ends with a display of the angle measured (in degrees). The result of the measurement is displayed in the list to the right of the image area.
- Polygon Curve  Use this to measure along a line with any number of segments. Click from corner point to corner point. Complete the measurement by right-clicking. The result of the measurement is displayed in the list to the right of the image area.
- Color selection  Here you can select a color for the tool you want to draw in. Simply click on the colored rectangle and choose a color from the list.
- Auto Color  Activated: Automatically changes the color of the drawn-in tool.
Parameter Description
3D Measurements  All measurements contained in the 3D volume are displayed here. The
list contains the following columns:
- Eye icon Here you can select whether or not a measurement tool is displayed
in the image. If you click in the title field of the column, the setting is
made simultaneously for all entries.
- Type Displays the type of a tool. If you click on the icon, you can change
the color of the tool.
- A No function.
- Name Displays the name of the tool. To change the name, double-click on
the entry, enter a new name and confirm by pressing Enter.
This module offers functionality to combine 3D and 2D visualization on one screen, enabling the
user to render up to three 2D view panes and one 3D view pane together in one new viewer (Tomo3D view).
The 3D view features ray-casting-based volume rendering with transparency, volume and maximum
intensity modes, flexible channel-wise adjustment of the 3D view, background color and lighting.
The positions of the three orthogonal 2D view panes are synchronized with the 3D view, can
be interactively positioned and are indicated by colored cut lines.
The Tomo3D view combines a 3D viewer with up to three orthogonal 2D views. The Tomo3D view is
only available if you have loaded or acquired a z-stack.
1 Left Toolbar
Toolbar to manipulate the image display. For more information, see Left Toolbar (To-
mo3D) [} 546].
2 Image View
Area where you interact with the image views and set the cut lines with the mouse. You
can display up to four different views, including a 3D and different 2D views. The num-
ber of image views can be set in the Ortho Display tab, the type can be set by the drop-
down in the top left corner of each view. The individual views can be synchronized with
the Synchronize View Area of 2D Panes checkbox in the Dimensions tab.
3 Snap button
Creates a 2D image of the currently displayed image views. All annotations are burned in
automatically.
4 Bottom Toolbar
Toolbar to manipulate the image display, see Bottom Toolbar (Tomo3D) [} 546].
5 View Options
Area for general and specific view options.
Parameter Description
Bottom thumb wheel  Zooms in or out of the image.
Rotate  Enables you to rotate the 3D image in any way you wish within the space.
Zoom  Enables you to increase or reduce the zoom factor of the image area.
Panning
Right thumb wheel  Rotates the 3D volume around the (Z) axis perpendicular to the screen plane.
Parameter Description
Cut Lines Sets the positions (pixel values) for the section lines using the X/Y/Z
sliders or input fields.
Alternatively you can also adjust the positions directly in the image
area. To adjust the positions, move the mouse over a section line in
the image. Hold down the left mouse button and move the mouse.
Cut Line Opacity Only visible if the Show All mode is activated.
Here you can enter the degree of opacity of the section lines from 0%
(invisible) to 100% (completely opaque).
Parameter Description
- Displays three horizontally separated views.
Here you can define the appearance of the 3D volume. On the tabs available here, select
the setting that you want to change (e.g. Transparency).
Parameter Description
Channel selection  Here you can select the channel of a multichannel image for which you want to set the transparency.
Threshold Sets the lower threshold value in percent of the gray levels displayed.
With this setting you specify the gray value range for the relevant
channel that you want to be included in the rendered image.
Ramp Sets the extent of the transition from completely transparent to com-
pletely opaque (0-100 percent).
Histogram Displays the settings that you enter using the sliders schematically.
The x axis represents the gray level values and the y axis the opacity.
You can also change the position of the curve using the mouse.
Parameter Description
Background Color Sets the background color for the 3D view. To do this, click on the
color field and select the desired color.
Parameter Description
Brightness Sets the brightness of the light source.
Here you can specify which projection/rendering mode you want to use to display the 3D volume.
There are three view modes available. To activate the desired view mode, click on the corresponding
button.
Parameter Description
Transparency  Activates the Transparency rendering mode where the structures in
the image are rendered in a similar fashion as in the Volume mode.
This render mode is particularly suitable for dense image data. The
key difference is an applied edge enhancement filter to allow more
focus on relevant structures within the data while simultaneously fading
out homogeneous and less important areas.
Volume Activates the Volume mode where the structures in the image are
rendered as three-dimensional objects and illuminated by means of a
virtual light source. This allows a realistic and, in contrast to the maxi-
mum projection mode, a quantitative display of the volume.
Maximum Activates the Maximum intensity projection mode where only the
pixels with the highest intensity are displayed along the observation
axis.
Parameter Description
Resolution Selects the resolution for the image. You have the following options:
§ Current Resolution
§ 720 x 576 (SD)
(SD = Standard Definition)
§ 1024 x 768
§ 1920 x 1080 (HD)
(HD = High Definition)
§ 4096 x 3072 (4K)
The acronym OAD, for Open Application Development, describes both the OAD platform in
ZEN and the process of developing applications on it. The platform has been made available
to our customers to enhance the functionality of ZEN in a flexible way. With OAD, typical
microscopy workflows can be integrated into the ZEN software. OAD highlights include the
macro interface, which provides access to the major functionality of ZEN and its objects, as
well as access to external libraries like the .NET Framework, significantly enlarging the field of application.
The software offers the following components which we regard as main parts for Open Applica-
tion Development (OAD):
§ Macro Recorder
§ Macro Editor
§ Macro Debugger
§ Macro Interface (Object Library)
§ ImageJ Extension
Basic functionality
All ZEN products (ZEN lite excluded) come with a basic macro functionality which allows you to
play existing macros within the software (using the Macro tool).
Info
Within the software you can only run .czmac macro files which are recorded or saved in the
ZEN macro environment. To run your own macros later on they must be located in the folder:
…/User/Documents/Carl Zeiss/ZEN/Documents/Macros.
Licensed functionality
When you have licensed the Macro Environment functionality you will get the:
§ Macro Recorder,
§ Macro Editor and
§ Macro Debugger.
In the Right Tool Area you find the Macro tool [} 979]. The Macro Editor dialog allows you to
generate and work with macros similar to Excel/Word macros. The macro interface is part of ZEN
software and therefore not a separate product. The ImageJ Extension is the first extension for
ZEN and will be free of charge.
User forum
A user forum was established to allow users to exchange macros and to discuss solutions. You
will find a lot of example macros and further documentation there. The user forum can be
reached under www.zeiss.com/ZEN-OAD.
Prerequisite ü You run a licensed version of ZEN software. Note that the macro environment is not available
for the free of charge version ZEN lite.
ü You have a macro file available that you want to run in the software.
1. Copy your macro file into the following folder:
.../User/My Documents/Carl Zeiss/ZEN/Documents/Macros.
2. Start the software.
3. In the Right Tool Area open the Macro tool.
à You see your macro in the list under User Documents.
4. Select your macro.
5. Click on the Run button.
Your macro is executed. You have successfully played a macro in ZEN.
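Step 1 above can also be scripted, e.g. when deploying macros to several workstations. Below is a sketch using only the Python standard library; `deploy_macro` is a hypothetical helper, and the demonstration runs in a temporary directory instead of the real user profile:

```python
import shutil
import tempfile
from pathlib import Path

def deploy_macro(macro_file, documents_dir):
    """Copy a .czmac macro into ZEN's macro folder so the Macro tool lists it."""
    target = Path(documents_dir) / "Carl Zeiss" / "ZEN" / "Documents" / "Macros"
    target.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy(macro_file, target))

# Demonstration in a temporary directory instead of the real user profile.
with tempfile.TemporaryDirectory() as tmp:
    macro = Path(tmp) / "hello.czmac"
    macro.write_text("<Script>...</Script>")  # placeholder content
    copied = deploy_macro(macro, Path(tmp) / "Documents")
    print(copied.name)
```

In a real deployment, `documents_dir` would be the user's My Documents folder, matching the path the manual names above.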
This module enables the analysis of physiological time series data with Ca2+ calibration, including
mean ROI measurement. It supports imaging with single wavelength (e.g. Fluo-4) and dual wavelength
dyes (e.g. Fura-2), allows ratio calculations, and offers flexible charting and image display as well
as data table display with data export functionality. It also offers the use of definable switches for
online annotations, changing of acquisition speed and freely configurable TTL triggers. Some
functionality is generally available; for the full set of features you need the dedicated license for
the module, see Licensing and Functionalities of Physiology [} 551].
See also
2 Acquiring Time Series Images [} 390]
Some basic functionality for physiology experiments is generally available in the software, but the
full Physiology (Dynamics) functionality requires a license.
Basic functionality
The basic functionality is available for time series images and time series with z-stack images
opened in the software (ZEN lite excluded), or multi-positions (scenes) where a full time series is
collected sequentially at each position (Full Time Series per Tile Region). Note that for acquiring
time series images, you also need the license for the Time Series module. The functionality generally
available in ZEN includes:
§ Using the MeanROI offline functions to specify user-defined measurement regions (ROIs) af-
ter acquisition of your time lapse experiment and analyze their time-dependent changes in in-
tensity.
§ The functionality to display the intensity curves in charts or export the values in the form of
tables.
§ Definable switches for online annotations and change of acquisition speed. Pausing and refo-
cusing are possible via a live camera view.
Licensed Functionality
If you have licensed the module Physiology (Dynamics) and activated it in Tools > Modules
Manager, the additional functionality includes:
§ The option to calculate online/offline (during/post acquisition) ratios and display a ratio image.
§ Additional display layouts and analysis functions (ROI tracing etc.).
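As an illustration of the ratio concept (e.g. Fura-2 excited at 340/380 nm), a mean ROI ratio can be computed pixel by pixel while excluding pixels below a threshold as invalid, as described in Basics of Calculation of Intensity and Ratio Values [} 557]. The following is a simplified sketch of that idea, not ZEN's actual implementation:

```python
def mean_roi_ratio(ch_a, ch_b, threshold):
    """Mean ratio ch_a/ch_b over an ROI's pixels, counting only 'valid'
    pixels where both channels exceed the threshold."""
    ratios = [a / b for a, b in zip(ch_a, ch_b)
              if a > threshold and b > threshold]
    if not ratios:
        return None  # no valid pixels in this ROI
    return sum(ratios) / len(ratios)

# Hypothetical pixel intensities for the 340 nm and 380 nm channels;
# the third and fourth pixels fail the threshold test and are skipped.
ch340 = [400, 820, 5, 600]
ch380 = [200, 410, 300, 3]
print(mean_roi_ratio(ch340, ch380, threshold=50))
```

The thresholding step is why a change in ROI area can change the mean ratio: it alters which pixels count as valid.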
Prerequisite ü You have acquired a time series experiment with one or more channels. The experiment is
open and the first time point is displayed in the 2D view.
1. Select the MeanROI tab from the image view tabs in the Center Screen Area.
à The MeanROI view opens.
You are now prepared to start working with the MeanROI view. The following chapters will
show you the first steps.
Prerequisite ü You are in the MeanROI View or in the MeanROI Setup on Acquisition tab.
1. Go to the Graphics tab in the View Options.
2. Select a tool for drawing in ROIs, e.g. the Polygon tool.
3. Activate the Keep tool checkbox.
à The selected tool remains active after you have drawn in an ROI. This means you can
draw in several ROIs without having to re-select the tool.
4. Using the selected tool, in the image view draw in the objects or regions (ROIs) for which
intensity measurements are required.
à The ROIs are displayed in the list (Annotations/ Measurements Layer) on the Graphics
tab.
à Intensity measurements are performed for each ROI and displayed in the chart area to
the right of the image view.
You have successfully defined measurement regions for the intensity measurement.
Info
Measurement time
Note that the time taken to initially create measurements will vary as some data is cached to
memory. Thus, when a long time series image is opened that already contains ROIs, you might
have to wait briefly until ZEN completes its measurements. The duration depends on, e.g., the number
of ROIs, number of time points, image size, number of pixels, etc.
If objects move laterally in the course of the time series, you can adjust the ROIs at each Time
Point in order to follow the objects.
3. Open the ROI Tracing tab and activate the Enable ROI Tracing checkbox. To edit the position
of a single key frame, change the Key Frame Edit Mode to single mode.
Note that you can only select the key frame edit mode if the frame number is set to a value
>1.
4. Adjust the position of the ROI using drag & drop. To do this, select the ROI in the image
area by pressing the left mouse button. Then move the ROI to the new position and release
the mouse button. Note that rectangular or contour ROIs can also be rotated.
5. If necessary, you can change the shape of an ROI by right-clicking on it and selecting
Edit Points (e.g. for polygon contours). It is also possible to rotate an ROI if necessary.
Note that if the area of the ROI changes, the mean intensity value will change. For ratio val-
ues in which thresholding is applied, only "valid" pixels are taken into account (see Basics of Cal-
culation of Intensity and Ratio Values [} 557]). Ratio calculations are only available if you have
the module Physiology (Dynamics).
6. Adjust the shape or rotation of the ROI by dragging the contour points or using the rotation
handle.
à Changes to the position and shape of the ROIs are adopted for all subsequent time
points.
7. Repeat the previous steps for all other time points for which you want to adjust an ROI.
For a selected ROI you can see a list of the time points at which its position or shape was
modified. As the distance (in frames) between key frames can vary, linear interpolation is
used to progress the ROI smoothly through the time points. Alternatively, deactivate
interpolation (Constant) or set it to a spline method that may better describe the movement
of the object you are tracing.
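The interpolation between key frames can be illustrated with a short sketch. This is illustrative Python, not part of ZEN; the function name and the key-frame representation (frame index plus ROI center) are assumptions for this example:

```python
# Hypothetical sketch of interpolating an ROI position between key frames
# (not ZEN's actual implementation). A key frame stores a frame index and
# the ROI center; positions in between are linearly interpolated.

def interpolate_roi_center(key_frames, frame):
    """Linearly interpolate an (x, y) ROI center for a given frame.

    key_frames: sorted list of (frame_index, (x, y)) tuples.
    """
    # Before the first / after the last key frame: hold the boundary value
    if frame <= key_frames[0][0]:
        return key_frames[0][1]
    if frame >= key_frames[-1][0]:
        return key_frames[-1][1]
    # Find the surrounding pair of key frames and interpolate between them
    for (f0, (x0, y0)), (f1, (x1, y1)) in zip(key_frames, key_frames[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))

key_frames = [(1, (10.0, 10.0)), (5, (18.0, 14.0))]
print(interpolate_roi_center(key_frames, 3))  # midpoint: (14.0, 12.0)
```

A spline mode would replace the linear blend with a smooth curve through the key-frame positions, which can better match curved object paths.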
You have successfully adjusted the measurement regions to the course of the experiment.
Here you will find out how to adjust the display of the measured intensity values in charts and ta-
bles according to your wishes.
9. Enter the desired values first into the Max and then the Min input fields.
à The minimum and maximum axis values of the diagrams are adjusted. Note that the Y-
axis scaling can be adjusted individually for each chart.
10. To change the unit of the X axis, click on the Fixed button under X Units.
à The dropdown menu for the units is activated. You can now select the desired unit.
11. On the Layout tab, it is also possible to determine if a given channel view and/ or chart
panel should be hidden. This is useful if these do not contain information that needs to be
visible, thus increasing space on the screen for the remaining items. For example, in many
applications transmitted light is used to monitor the specimen, but the intensity information
(chart) is not required. The controls behave in a similar manner to the channel toggles on
Dimensions tab.
You have successfully adjusted the display of the intensity values.
Use this function to subtract background values from the measurement values. A background cor-
rection will allow you to make a better comparison of the magnitude of any fluorescent intensity
changes observed over the time course of an experiment. Determine the background value with
the help of a Background ROI or define a fixed value. Note that the background correction via ROI
is only available if there are at least two ROIs defined in the image!
If the ratio calculations are enabled, the background correction parameters are defined on the Ra-
tio tab. The background correction values on MeanROI tab are disabled, i.e. not used in this
case.
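The effect of the correction can be sketched as follows. This is illustrative Python, not ZEN functionality; the function name and the values are hypothetical, and subtracting the background from each ROI mean is an assumed reading of the manual:

```python
# Sketch of background correction on ROI mean values: the background is
# either a fixed constant or the mean intensity of a dedicated background
# ROI, and it is subtracted from each measurement ROI's mean.

def background_corrected(roi_means, background):
    return [m - background for m in roi_means]

measurement_rois = [130.0, 95.0, 210.0]   # mean intensities of three ROIs
background_roi_mean = 40.0                # mean of the background ROI
print(background_corrected(measurement_rois, background_roi_mean))
# [90.0, 55.0, 170.0]
```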
Prerequisite ü To calculate ratios (quotient of two fluorescence intensities) and display ratio images, you
need the Physiology (Dynamics) module.
ü You have a suitable image data set open.
ü You are in the MeanROI view on the Ratio tab (view option).
1. Activate the checkbox Enable Ratio Calculation.
2. In the Method dropdown list, select the Single Wavelength (F/F0) entry.
3. In the Calculation dropdown list select the channel for calculating the ratio.
4. In the Reference image (Ft0) setup, define the frames of the time series image from which
you want the reference value Ft0 to be calculated.
5. Click on the Update button.
à The ratio values are calculated. The ratio image and a diagram for the ratio values are
displayed in the MeanROI view. For very large images (pixels and time points) it might
be necessary to use the Cache Ratio Image function on the Ratio tab, as this will elimi-
nate flickering when playing through the images at speed.
You have successfully calculated a ratio for a single wavelength dye such as Fluo-4.
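The F/F0 principle behind this procedure can be sketched as follows. This is illustrative Python, not ZEN code; the function name, the trace values, and the use of per-frame mean intensities are assumptions for the example:

```python
# Sketch of the Single Wavelength (F/F0) ratio: the reference value Ft0 is
# the average of the first N frames, and every frame is divided by it.

def f_over_f0(frame_means, n_reference):
    ft0 = sum(frame_means[:n_reference]) / n_reference
    return [f / ft0 for f in frame_means]

# Baseline of 4 frames around 100, then a stimulus response
trace = [100.0, 102.0, 98.0, 100.0, 150.0, 200.0]
print(f_over_f0(trace, 4))  # baseline about 1.0, stimulus peak 2.0
```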
Prerequisite ü To calculate ratios (quotient of two fluorescence intensities) and display ratio images, you
need the Physiology (Dynamics) module.
ü You have a suitable image data set open.
ü You are in the MeanROI View on the Ratio tab (view option).
1. In the Method dropdown list, select the Dual Wavelength entry or any other of the Ratio
Type formulas present.
2. In the Calculation dropdown lists select the channels for calculating the ratio.
à The ratio values are calculated automatically.
3. Click on the Cache ratio image button to cache all the ratio images of the current time se-
ries with the given ratio calculation parameters.
à The ratio values are calculated. The ratio image and a ratio diagram are displayed on the
MeanROI view. For very large images (pixels and time points) it might be necessary to
use the Cache Ratio Image function on the Ratio tab, as this will eliminate flickering
when playing through the images at speed.
You have successfully calculated a ratio for a dual wavelength dye such as Fura-2.
The ratio calculation in ZEN blue MeanROI/Physiology functions in the following manner:
After you have set up the ratio to your satisfaction (background correction/thresholding), you can
gather/ view all the results using the export functions found on the export tab in MeanROI view.
You can also view a smaller table within the MeanROI view itself by activating the appropriate
layout.
How are your threshold and background values handled for intensity and ratio measurements?
The threshold value is applied prior to background subtraction if both are active. In this case,
the intensity values of any given ROI are handled as follows for charts/tables: if a pixel in the
ROI is below the threshold value, it is ignored for the calculation of the mean value of the ROI.
After that, the background is subtracted from the mean value to get the corrected intensity
value of the ROI (note that if no pixel in the ROI remains valid after the application of the
threshold, the corrected mean intensity is always 0 and hence the ratio is also 0). In the case
of the ratio image, each pixel is "validated" based on the threshold value. If the pixel is above the
threshold, the pixel value is kept (i.e. it is valid); otherwise it is set to NaN (Not a Number,
which is not the same as zero) and is considered invalid. As before, background correction is
done after thresholding. If the pixel value is NaN, the ratio pixel value is NaN. If the pixel is still
valid, the value pixel value - background value is used in the ratio calculation. If a negative
value results, it is clipped to 0.
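The per-pixel validation for the ratio image described above can be sketched as follows. This is an assumed reading of the manual, not ZEN source code; the function name and the numeric values are hypothetical, and the exact behavior at the threshold boundary is an assumption:

```python
import math

# Sketch of per-pixel validation for the ratio image: threshold first, then
# background subtraction, NaN for invalid pixels, negative results clipped to 0.

def ratio_image_pixel(ch1, ch2, thr1, thr2, bg1, bg2):
    def corrected(value, threshold, background):
        if value < threshold:                 # below threshold -> invalid
            return math.nan
        return max(value - background, 0.0)   # clip negative results to 0
    c1 = corrected(ch1, thr1, bg1)
    c2 = corrected(ch2, thr2, bg2)
    if math.isnan(c1) or math.isnan(c2) or c2 == 0:
        return math.nan                       # invalid input -> invalid ratio
    return c1 / c2

print(ratio_image_pixel(150, 100, 60, 50, 10, 20))  # (150-10)/(100-20) = 1.75
print(ratio_image_pixel(50, 100, 60, 50, 10, 20))   # ch1 below threshold -> nan
```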
The following example shows how a ratio value is generated based on the applied background
and thresholding values:
Consider a region of interest that is 6 pixels wide by 1 pixel high. The pixels of the region in the
Wavelength 1 image are as follows:
[50, 75, 100, 125, 150, 175].
For the purpose of this example, assume the threshold = 60 for wavelength 1.
ZEN thresholds Wavelength 1 to obtain:
[--, 75, 100, 125, 150, 175].
The pixels of the region in the Wavelength 2 image are as follows:
[25, 25, 25, 25, 100, 100].
For the purpose of the example, assume the threshold = 50. ZEN thresholds Wavelength 2 to ob-
tain:
[---, ---, --- , ---, 100, 100].
ZEN computes the ratio by ratioing only the averaged values of the valid pixels in each individ-
ual wavelength. To recap, the pixels for each wavelength were:
[--, 75, 100, 125, 150, 175] (Wavelength 1)
[---, ---, --- , ---, 100, 100] (Wavelength 2)
The ratio value of this region is calculated by taking into account only the common area, i.e.
the valid pixels common to both wavelengths (which in this example is only 2 "valid" pixels).
Using the threshold values as above, the pixels that are used to calculate the ratio average are:
150/100
175/100
which gives a ratio of 1.625.
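The example above can be reproduced with a short sketch. This is illustrative Python, not ZEN code; treating a pixel as valid when it reaches the threshold is an assumption about the boundary case:

```python
# Reproducing the worked example: average the valid common pixels per
# wavelength, then take the ratio of the two means.

w1 = [50, 75, 100, 125, 150, 175]   # Wavelength 1 region
w2 = [25, 25, 25, 25, 100, 100]     # Wavelength 2 region
thr1, thr2 = 60, 50                 # thresholds from the example

# A pixel position is "common valid" if it passes the threshold in BOTH channels
common = [(a, b) for a, b in zip(w1, w2) if a >= thr1 and b >= thr2]
mean1 = sum(a for a, _ in common) / len(common)   # (150 + 175) / 2 = 162.5
mean2 = sum(b for _, b in common) / len(common)   # (100 + 100) / 2 = 100.0
print(mean1 / mean2)  # 1.625
```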
To get an overview of all the results, including the original values not corrected for their validity in
this manner, use the data table creation function in the MeanROI export tab. This opens, for ex-
ample, a new document or allows you to export the results as a *.csv file. This data table/ *.csv in-
cludes for each ROI the following information/measurements:
<Channel name>_<Region ID>_IntensityAreaThrs: Threshold-corrected area within the ROI for the given channel.
<Channel name>_<Region ID>_IntensityMean: Mean intensity of pixels within the geometric area of the ROI.
<Channel name>_<Region ID>_IntensityMeanThrs: Mean intensity of pixels above the set threshold for the given channel.
Ratio <Region ID>: Mean ratio value derived from common "valid" pixels.
This is repeated for the second channel, and at the very end you will find the Ratio value. Thus,
the threshold corrected values are provided for each channel (mean intensity and the correspond-
ing area from which this is derived) as well as the ratio value for the common valid pixels. Relative
time, markers, focus values and parameters from the incubation (if configured) are also listed. In
the embedded table in Mean ROI view you will find a summary that gives the following values:
Mean intensity of pixels above the set threshold for each channel (wavelength) and the corre-
sponding ratio values. No Area values are given here.
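The per-ROI measurements behind these columns can be sketched for a single channel as follows. This is illustrative Python based on an assumed reading of the column descriptions above; the helper name, the pixel values, and the pixel-area scaling are hypothetical:

```python
# Sketch of the measurements behind IntensityMean, IntensityMeanThrs and
# IntensityAreaThrs for one ROI in one channel.

def roi_measurements(pixels, threshold, pixel_area=1.0):
    """Return (IntensityMean, IntensityMeanThrs, IntensityAreaThrs)."""
    valid = [p for p in pixels if p >= threshold]
    intensity_mean = sum(pixels) / len(pixels)          # all pixels in the ROI
    intensity_mean_thrs = sum(valid) / len(valid) if valid else 0.0
    intensity_area_thrs = len(valid) * pixel_area       # area of valid pixels
    return intensity_mean, intensity_mean_thrs, intensity_area_thrs

print(roi_measurements([50, 75, 100, 125, 150, 175], threshold=60))
# (112.5, 125.0, 5.0)
```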
In the MeanROI view (not the MeanROI setup) you can interact with a table in the following
ways:
§ Scroll up or down, or left to right. Time values are given at the far left and, at the far right,
any markers are shown at the time point they were created. In between you find the values for
each ROI in the first channel, then the second and so on, then the ratio values.
§ If you click on a column header, you select the entire column and, in your charts, the trace
corresponding to this ROI is highlighted by a thicker line. Multiple columns can be selected by
pressing and holding the Shift key.
§ If you click any given value in a column, not only will the trace of the corresponding ROI be
highlighted, but the images that correspond to this time point will be displayed, and the playhead
(vertical blue line) of all charts will synchronize to this time point. This allows quick and easy
examination of the data/events.
If you own the Physiology (Dynamics) module you can use the MeanROI setup to specify user-
defined measurement regions (ROIs) before the acquisition of your time lapse experiment and an-
alyze their time‑dependent changes in intensity online during acquisition. Ratios can also be calcu-
lated and displayed online - these are the typical functions used in physiology/ calcium Fura-2 ap-
plications.
Prerequisite ü To perform physiology experiments, you need the Physiology (Dynamics) module.
ü You have created a new experiment, defined at least one channel and adjusted the focus and
exposure time, see also Setting Up a New Experiment [} 51] and Acquiring Multi-Channel
Images with Cameras [} 51].
ü You have licensed the Time Series module and activated it in Tools > Modules Manager.
ü You are on the Acquisition tab.
1. Activate Time Series in the Acquisition Dimensions section.
à The Time Series tool is displayed in the Left Tool Area under Multidimensional Ac-
quisition.
2. Activate Dynamics in the experiment manager.
à The Dynamics tool is now displayed in the Left Tool Area under Applications.
à Note that the tool is not available if the Tiles or Panorama dimensions are activated.
Deactivate these dimensions to make the tool available.
3. Set up a time series experiment, see Acquiring Time Series Images [} 390].
4. Open the Dynamics tool.
5. Click MeanROI Setup.
You have completed the general prerequisites for Physiology experiments.
Prerequisite ü You have read the Workflow Physiology (Dynamics) Experiments [} 558] chapter.
1. In the MeanROI setup, select the Online Ratio tab in the view options .
2. Activate the Activate Live Ratio Generation checkbox.
3. In the Method dropdown list select a method for the ratio calculation. If you select the
Single wavelength (F/F0) method for the calculation of the online ratio, you need to de-
fine a reference image. See Calculating the Online Ratio for Single Wavelength [} 560].
4. If you want to use background correction for the calculation of the online ratio, activate the
desired entry under Background Correction. Note that activating the Ratio functions on the
Online Ratio/Ratio tab deactivates the background correction settings on the MeanROI tab,
i.e. the background functions of these tabs are mutually exclusive.
à To allow you to apply a constant background value (Constant entry), an input field, in
which you can enter the desired value, appears in the formula for the ratio calculation.
à The ROI entry can only be selected once you have defined at least two ROI in your im-
age.
5. Under Calculate complete the formula for calculating the online ratio by selecting the de-
sired entries from the dropdown lists and indicating values in the input fields.
6. Activate the Threshold checkbox, if you want to set a threshold in your experiment. For
information on the calculation of the ratio value R in relation to thresholds, see Calculation
of the Ratio Value R [} 560].
7. You can adjust the settings for the ratio calculation as required and the preview image on
the MeanROI setup is adapted accordingly. Press the Snap button if you need to update
the images.
You have successfully activated/adjusted the calculation of the online ratio.
If you set up your online ratio calculation and select the Single wavelength (F/F0) method for
the calculation of the online ratio, you need to define a reference image.
Prerequisite ü You are setting up an online ratio calculation (see also Setting up the Online Ratio Calcula-
tion [} 559]).
ü You have selected the Single wavelength (F/F0) as the calculation method.
1. In the Online Ratio tab, in the input field of Reference Image Set-up, enter the number
of images from which the reference image should be averaged.
2. Click on Define.
The images are acquired, and a reference image is calculated from them. You can now continue
to set up the online ratio calculation [} 559].
In ZEN the calculation of the ratio value R of an individual pixel xy in case of a dual wavelength
dye is determined as follows:
Rxy = Ch1xy / Ch2xy
where:
Ch1xy = 0 if (Ch1xy - background of Ch1) < Threshold of Ch1
Ch1xy = Ch1xy - background of Ch1 if (Ch1xy - background of Ch1) > Threshold of Ch1
The same is true for Ch2xy using the corresponding values for background and threshold.
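This rule can be transcribed directly into a short sketch. The Python below is illustrative, not ZEN code; the function names and numeric values are hypothetical, and returning NaN for a zero denominator is an assumption:

```python
import math

# Sketch of the ratio value R for one pixel of a dual wavelength dye:
# subtract the background, compare against the threshold, then ratio.

def corrected(value, background, threshold):
    if value - background < threshold:
        return 0.0                     # below threshold -> set to 0
    return value - background

def ratio_r(ch1, ch2, bg1, thr1, bg2, thr2):
    c1 = corrected(ch1, bg1, thr1)
    c2 = corrected(ch2, bg2, thr2)
    return c1 / c2 if c2 != 0 else math.nan   # avoid division by zero

print(ratio_r(340, 190, 40, 50, 80, 50))  # (340-40)/(190-80) = 300/110
```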
Prerequisite ü You have read the Workflow Physiology (Dynamics) Experiments [} 558] chapter.
Prerequisite ü You are in the MeanROI View or in the MeanROI Setup on Acquisition tab.
1. Go to the Graphics tab in the View Options.
2. Select a tool for drawing in ROIs, e.g. the Polygon tool.
3. Activate the Keep tool checkbox.
à The selected tool remains active after you have drawn in an ROI. This means you can
draw in several ROIs without having to re-select the tool.
4. Using the selected tool, in the image view draw in the objects or regions (ROIs) for which
intensity measurements are required.
à The ROIs are displayed in the list (Annotations/ Measurements Layer) on the Graphics
tab.
à Intensity measurements are performed for each ROI and displayed in the chart area to
the right of the image view.
You have successfully defined measurement regions for the intensity measurement.
Info
Measurement time
Note that the time taken to initially create measurements will vary as some data is cached in
memory. Thus, when a long time series image is opened that already contains ROIs, you might
have to wait briefly until ZEN completes its measurements. The duration depends on e.g. the
number of ROIs, number of time points, image size, number of pixels etc.
Here you will find out how to adjust the display of the measured intensity values in charts and ta-
bles according to your wishes.
Use this function to subtract background values from the measurement values. A background cor-
rection will allow you to make a better comparison of the magnitude of any fluorescent intensity
changes observed over the time course of an experiment. Determine the background value with
the help of a Background ROI or define a fixed value. Note that the background correction via ROI
is only available if there are at least two ROIs defined in the image!
If the ratio calculations are enabled, the background correction parameters are defined on the Ra-
tio tab. The background correction values on MeanROI tab are disabled, i.e. not used in this
case.
Prerequisite ü You have read the Workflow Physiology (Dynamics) Experiments [} 558] chapter and set up
an experiment in MeanROI Setup.
ü You are on the Acquisition tab.
1. Start your Physiology experiment by clicking on the Start Experiment button.
à The time series experiment is started. The MeanROI View [} 566] (online) opens and displays
the current images and the intensity curves for each ROI measured online. The intensity
curves are displayed in the Time Line View and in the diagrams. Note that the
MeanROI view only begins displaying at the third time point. This is noticeable when the interval
time is longer. This display delay should fall within the typical baseline of this type of
experiment, i.e. prior to the first stimulus of the sample.
2. You can pause the experiment at any time by clicking on the Pause Experiment button
and continue it again by clicking on the Continue Experiment button.
3. The focus can be adjusted during the experiment. To prevent out-of-focus images from
being acquired, pause your experiment and use the Live acquisition button to adjust the fo-
cus. Then continue the experiment. Note that using the Live view only works with experi-
ments run in interactive mode. In triggered acquisition scenarios this is not possible.
4. Adjust the display of the intensity values during the experiment by changing the settings on
the Layout or Charts tab. The unit of the X-axis cannot be changed during the experi-
ment.
5. You can move and change ROIs during acquisition. The changes are adopted for all time
points, see Drawing in and adjusting ROIs. Note that ROI tracing functions (these allow ob-
jects to be followed in XY) are only available after an acquisition.
6. Activate Switches in the Time Series tool during the experiment to perform the corre-
sponding actions.
à Various events, such as the activation of switches or the pausing of the experiment, are
labeled in the Time Line view by markers.
7. On the Dimensions tab deactivate the Follow Acquisition checkbox to analyze the data
acquired up to that point. To do this, select the corresponding time points using the Time
slider, the diagram sliders or the Time Line view slider in the MeanROI view.
8. Change the size of the area marked in blue in the Time Line View to adjust the section dis-
played in the charts (time axis).
You have successfully started the experiment, analyzed it online and influenced it.
If objects move laterally in the course of the experiment, you can adjust the ROIs at any time dur-
ing the experiment in order to follow the objects.
Prerequisite ü To perform the experiment, you need the Physiology (Dynamics) module.
ü You have a Sutter DG4/5 with appropriate excitation filters for Fura-2 and a Fura-2 filter
set in the microscope's reflector wheel.
ü You are on the Acquisition tab.
1. Create a new experiment in the Experiment Manager, e.g. "Physiology Fura-2".
2. Add the channel Fura-2 using Smart Setup.
3. Activate the Time Series checkbox in the acquisition dimensions.
Prerequisite ü You have licensed the Time Series module and activated it in Tools > Modules Manager.
1. Open the Time Series tool.
2. Using the Duration slider and the dropdown list for the unit, specify the duration of the ex-
periment, e.g. 10 min.
3. Using the Interval slider and the dropdown list for the unit, specify the length of the inter-
val between acquisitions, e.g. 1 second.
4. To create interactive switches open the Interactive Switches section in the Time Series
tool. This section is visible only if the Show All mode is activated.
5. Click .
à A new switch is added.
6. Edit the switch by clicking on the arrow to the right of the switch.
à The switch properties are visible.
7. Enter a name, e.g. Fast. Activate the Color checkbox and select a color, e.g. blue. Define
an action to be performed when you activate the button, e.g. As fast as possible.
You have successfully set up a time series and created a switch.
In the Mean ROI view you can draw ROIs and measure their intensity profile after acquiring time
series experiments. The intensity profiles are displayed as charts and can be exported to data ta-
bles.
Info
§ The Physiology (Dynamics) module activates additional features to those of MeanROI for
the offline analysis of physiology experiments, e.g. Ratio functions and ROI tracing.
§ In this view the image area is always to the left, the charting area always to the right. Depend-
ing on which Region layout you have selected in the Layout tab, the MeanROI view can have
a different appearance.
1 Image area
Here you see the images for each channel of the time series and the ratio image (if the
ratio calculation is enabled). The display of images can be adapted in the Layout tab.
2 Charting Area
Here you see the charts for the values of all channels selected in the Layout tab as well
as for the ratio calculation (if it is enabled on the Ratio tab). If a ROI is selected in the im-
age area on the left, the corresponding plot line is highlighted (the plot line is thicker) in
the charts. The lines in the merged channel image can either have the color of the ROI,
or the color of the channel (option available on the Charts tab).
Playhead (blue line)
Indicates the current frame of the time series visible in the image panel(s). The position is
synchronized with the displayed image frame number and vice versa and it can be moved
via drag & drop. The current time point of the visible frame is displayed to the right of
the playhead line in the same time unit as the x-axis of the chart.
3 Table
Displays the values for all channels and regions at the different time points, as well as the
temperature, focus (if present in the image metadata) and information about markers. If
you deactivate a channel in the Visible Charts section of the Layout tab, the corre-
sponding columns are hidden in the table.
This table is synchronized with the image view and the charts. If you select a field in the
table, the corresponding ROI is selected in the image view and the charts (playheads) are
updated accordingly.
5 View Options
Here you have your standard view options as well as specific options for MeanROI, for
example for the Layout or the calculation of the Ratio.
Info
Hover over the plot with the mouse (crosshair). A tool tip appears with details of the intensity
value at this position, ROI ID #, Channel, and time point (in currently set time unit of x-axis).
Note these values (intensity and time) are interpolated. You can visualize the time points along
a plot by activating the Show Tick Marks function on the Charts tab.
This chart supports similar functions as detailed for charts in Mean ROI.
2 Zoom control
The transparent area (blue) is the user definable zoom range. The zoom range translates
into the display range of the x-axis of the other charts displayed above the time line view.
To edit the zoom range, use the controls at either end. Click and hold the left
mouse button to drag and resize the zoom area.
Parameter Description
Background Correction  If you have activated the Live Ratio Generation in the Online Ratio
tab, or the Ratio Calculation in the Ratio tab, the background correction is disabled here and
only visible in the Online Ratio/Ratio tab. Also note that the correction via ROI is only available
if there are at least two ROIs defined in the image!
The following modes are available:
Parameter Description
- None No background correction is performed.
- Constant Allows a user defined numeric value to be entered for both channels
in the spin box.
- ROI  Allows you to select the background ROI; the determined value will be
channel-specific.
- Define Default Defines the current layout and chart setup as the default. The layout
and charts can be changed in the Layout tab.
- Apply Default Displays the default setup for layout and charts.
Parameter Description
MeanROI View Adjusts how the images, charts, and the table will be displayed.
Layouts
- Image and Chart  Selects one of three different layouts of how an image together with
a chart will be displayed. If you click on one of the buttons the layout will be changed.
- Image and Chart with Table  Selects one of three different layouts of how an image and a
chart together with a table will be displayed. If you click on one of the buttons the layout
will be changed.
Visible Views Selects the channels for which the image should be displayed in the
image area.
- Single Channel View  Activated: Only one channel can be selected whose image is displayed.
Deactivated: You can manually switch on/off channels whose images
should (not) be displayed.
Visible Charts Selects the channels for which the charts and table columns should be
displayed in the chart area and the table.
- Single Channel View  Activated: Only one channel can be selected whose chart and
information is displayed.
Deactivated: You can manually switch on/off channels whose charts
and information should (not) be displayed.
Show Markers/Switches  Activated: The temporal position of any switches and markers is
always displayed on the charts, both during and after acquisition.
Parameter Description
Show Time Line View  Only visible if you have licensed the Physiology (Dynamics) module.
Activated: The Time Line View panel is displayed below the other im-
age chart panels of the Center Screen Area. The Time Line View
panel is designed to provide an overview of the experiment whilst al-
lowing the user to examine the detail displayed in the other chart
panels by means of an integrated zoom tool. The Time Line View can
be hidden by unselecting the check box as required, both during and af-
ter an experiment.
Show View Captions  Activated: Displays the channel name clearly with the image of each
channel in the multichannel view layout.
Deactivated: Hides the channel name of each image in the image
view.
Parameter Description
All Chart Settings (X-/Y-Axis)  Note that a function is active when the button is highlighted in blue.
The settings for X- and Y-Axis (only if Show All is activated) are the
same, see description below. The Y-axis settings are always applied
to the selected chart. The currently selected chart name (channel) is
displayed above the Y-axis settings.
- Auto  The scaling of the respective axis is automatic, allowing for an optimal
display of all values.
- Fixed The upper and lower limit of the axis can be defined using the min
and max spin boxes.
X-Units
- Fixed You can select the desired unit for the x-axis from the dropdown list.
Show Tick Marks Activated: Displays tick marks in the chart. You can set the Form
and Size of the tick marks. The tick marks have to be set per chart.
Use Channel Color Only visible if you display the merged channels chart.
Activated: Displays the lines in the chart in the colors of the channel.
If a channel has no color defined, the chart line is displayed in white.
The colors are synchronized with the channel settings, i.e. they up-
date accordingly if the channel colors are changed.
Parameter Description
Data Table
Parameter Description
- As New Document  Opens the measurement data table in a new document tab. The table
displays all measurement values and area for all ROIs in each channel.
If event markers are present, these are also listed here at the appropri-
ate time points. For a description of the measured parameters, see
Basics of Calculation of Intensity and Ratio Values [} 557].
- Save as *.csv  Opens the Save As dialog and allows the measurement data to be
exported as a comma separated value (*.csv) file. The following val-
ues are exported for each ROI and channel: intensity, area and, if
present, event markers. The exported values are the raw data without
the subtraction of any background correction.
Ratio image
- As New Document  Opens the ratio image in a separate new document as a *.czi file (cur-
rent Z only).
- Save as  Opens the Save As dialog to save the ratio image directly to a *.czi
file.
This view option is almost identical to the Online Ratio tab in the MeanROI Setup, see Online
Ratio Tab [} 572]. In fact when an experiment is finished the exact same values used for the dis-
play of the online ratio are transferred to the offline ratio tool of the MeanROI view and are
stored with your image for later reference.
For offline ratio assessment the settings can be changed and are applied to the ratio image and
the measured values. For large data sets (high resolution or many time points), a smooth playback
of the ratio image can be achieved by clicking on the Cache ratio image button. This stores the
ratio images temporarily in the computer's memory for very fast playback or adjustments of the
time slider. The following describes the differences on the Ratio tab:
Parameter Description
Ft0 = Average intensity of frame  Only available if the Single wavelength method is selected.
Defines the frames of the time series image from which the reference
value Ft0 should be calculated.
The numbers in the input fields refer to the frame number of the cur-
rent experiment in the MeanROI view. The desired frame numbers
can be entered with the buttons or directly by typing a number into
the field. Typically, your experiment should include a baseline of 5-10
images before and/or in between a stimulation/activation.
– Update Applies the changes of the frame made with the input fields.
Cache ratio image Caches all the ratio images of the current time series with the given
ratio calculation parameters. This is done to avoid image flickering
when moving through the time series with the slider quickly.
Parameter Description
Calculation Dropdown  Selects the ratiometric method you want to use. Single and Dual
Wavelength dyes are supported with an additional three formulas
for further adapted (online or offline) image ratio calculations. The ra-
tio set-up will change in accordance with your selection.
- Single Wavelength  Select the channel in the dropdown menu. The Ft0 value is the aver-
aged fluorescence from the specified number of image frames. The
number of frames to average is defined in the spin box of the refer-
ence image set-up (see 10). The spin box at the far left is a multiplica-
tion factor.
- Dual Wavelength  Select the channels in the dropdown list required to calculate the ratio
values/image, e.g. for Fura-2, a dual excitation dye, the numerator is
the 340 nm image and the denominator the 380 nm image. For dual emis-
sion dyes the function is identical. The spin box at the far right is a
multiplication factor.
- Image Ratio Type 2  The formula calculates the normalized ratio of the difference between
two weighted channel intensities.
- Image Ratio Type 3  The formula calculates the ratio between the weighted difference and
the weighted sum of two channel intensities.
- Image Ratio Type 4  The formula calculates the ratio between the intensity difference of
two channels in relation to the intensity of one channel.
- Constant Allows a user defined numeric value to be entered for each channel in
the appropriate spin box.
- ROI  Allows you to select the background ROI defined in the Mean ROI view/
setup.
Note that for dual wavelength protocols the same ROI is used in each
case, but its channel specific values are applied for the correction.
For the ratio types Single Wavelength, Image Ratio Type 2, 3, and
4, no ROI background correction is available.
Color Select the color (LUT) used to display the ratio image. Per default the
Rainbow LUT is used as it allows intensity changes to be followed eas-
ily.
Enable Threshold Activated: Allows the threshold values to be set for the ratio calcula-
tion.
cells or near cell borders during the ratio calculation. Enter the
desired threshold value for each channel into the spin boxes provided.
For more detailed information on how ZEN handles thresholds, see
Basics of Calculation of Intensity and Ratio Values [} 557].
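The single- and dual-wavelength calculations described above can be sketched with NumPy. This is an illustration of the arithmetic only, not ZEN's implementation; the function names, the baseline frame count, and the multiplication factor default are assumptions for the example:

```python
import numpy as np

def single_wavelength_ratio(frames, n_baseline=5, factor=1.0):
    """F(t)/Ft0 ratio image: Ft0 is the mean of the first n_baseline frames."""
    frames = np.asarray(frames, dtype=float)
    ft0 = frames[:n_baseline].mean(axis=0)      # averaged reference image
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = factor * frames / ft0           # one ratio image per time point
    return np.nan_to_num(ratio)

def dual_wavelength_ratio(numerator, denominator, factor=1.0):
    """e.g. Fura-2: the 340 nm image divided by the 380 nm image, per pixel."""
    num = np.asarray(numerator, dtype=float)
    den = np.asarray(denominator, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = factor * num / den
    return np.nan_to_num(ratio)
```

A background-corrected variant would subtract the respective background value (constant or background-ROI mean) from each channel before dividing.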
This tab is only available if you have licensed the Physiology (Dynamics) module.
ROI tracing allows you to adjust the position of your ROI as necessary to accommodate the lateral
movement of an object whose mean intensity is to be measured in an image. This is done
by defining a series of one or more so-called key frames for individual ROIs. In this manner,
complex object movements can be corrected.
Parameter Description
Enable ROI Tracing Enables the functionality for ROI tracing.
Selected ROI Displays the number and shape of the currently selected ROI.
– Single Key Frame Manipulates the ROI for the currently selected time point and creates
a key frame. Note that a single key frame adjustment can only be
performed when the frame number (time point) is set to 2 or higher.
– Constant Does not interpolate the ROI position between the key frames, i.e. the
ROI is only present at the set key frames.
– Linear Determines the ROI position at the time points between key frames
based on linear interpolation.
– Spline Determines the ROI position at the time points between key frames
based on spline interpolation.
Key frame list Displays the key frames of the ROI and the changes compared to the
previous key frame.
Show
– Trajectories Activated: Displays the trajectories between the key frames in the
image.
– Ticks Activated: Displays ticks for the (center) position of the ROI for each
time point in the image. These ticks are only visible if Linear or Spline
is selected as interpolation method.
– Ghosted key Activated: Displays the shape of the ROI at each key frame.
frames Deactivated: Displays the shape of the ROI only at the currently se-
lected time point.
See also
2 Adjusting ROIs for Time Points [} 553]
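The Constant and Linear interpolation modes described above can be illustrated with a minimal sketch (Spline works analogously, e.g. with a cubic spline instead of linear interpolation). The key-frame representation below is hypothetical, chosen only for the example:

```python
import numpy as np

def roi_center(t, key_times, key_positions, mode="linear"):
    """Determine an ROI center (x, y) at time point t from sorted key frames.

    mode="constant": the ROI exists only at the key frames themselves.
    mode="linear":   positions between key frames are linearly interpolated
                     (a spline mode would use e.g. scipy CubicSpline instead).
    """
    key_times = np.asarray(key_times, dtype=float)
    key_positions = np.asarray(key_positions, dtype=float)
    if mode == "constant":
        hits = np.where(key_times == t)[0]
        return key_positions[hits[0]] if hits.size else None
    x = np.interp(t, key_times, key_positions[:, 0])
    y = np.interp(t, key_times, key_positions[:, 1])
    return np.array([x, y])
```

For example, with key frames at t=0 and t=10, the linear mode places the ROI halfway between the two key positions at t=5.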
FRAP (Fluorescence Recovery after Photobleaching) enables you to analyze time series acquisitions
with bleach events to determine the half time of recovery/decrease of fluorescent signals. It sup-
ports mono or bi-exponential fit algorithms, including options for background correction and cor-
rection of imaging-induced photobleaching. You also have the possibility to evaluate grouped re-
gions of interest. Additionally, you can determine the fading factor from a reference region (Ref.)
from the present experiment or a control experiment and reuse it for subsequent experiments.
The FRAP view (FRAP = Fluorescence Recovery after Photobleaching) is only visible for a time se-
ries data set which includes a minimum of one bleach event.
Note: When acquiring Airyscan SR or Airyscan MPLX data, the FRAP view only works as expected
if the complete time series data is processed.
1 Diagram
Displays the intensity-over-time diagram with the fitted curve per channel.
2 Image
Displays the time series image with the drawn in regions of interest.
3 Data Table
The table displays the fit parameters for each channel and analysis region group (one or
more regions of interest can be grouped for analysis, up to three groups can be ana-
lyzed).
Optionally, you can have a table of the average intensity values of each region group
(corrected for background and reference) for each time point and channel.
Any changes performed with this tab are immediately updated in the FRAP view (diagram and ta-
ble).
Parameter Description
Toolbar Offers tools to draw regions of interest into the image. These regions
are then used for analysis.
– Select Activates the selection mode that enables you to select the graphic
elements in the image area.
– Clone Activates the cloning mode that you can use to create an identical
copy of the last graphic element drawn in by simply clicking anywhere
into the image area. To exit this mode, either switch back to the
selection mode or press the ESC key.
– Draw Rectangle Enables you to draw a rectangular region of interest (ROI) into the
image.
– Draw Spline Contour Enables you to draw a spline contour. You can either define the
corner points by a series of clicks or you can trace a contour by keeping
the left mouse button pressed. Close this contour by right-clicking.
Corners are always rounded with this tool.
Parameter Description
Fit Formula Selects the mathematical model (mono or double exponential model)
for data fitting.
Diagram Line Selects the line thickness of the fit curve(s) in the diagram.
Table Activated: Displays an additional table which shows the intensity val-
ues (for each channel) per time point. The values are corrected based
on the background and reference region, if applied.
Diagram Marker Selects the time point markers of the fit curve(s) in the fit diagram.
Time Unit Selects the time units of the fit diagram, the fit data table and the in-
tensity table.
Fit Range Defines the data range for the fit algorithm.
Photofading Factor The photofading factor is either calculated from the current reference
region(s) or loaded from a previous experiment. The input field shows
the currently applied value.
The photofading factor = reference(t)/reference(t=0), fitted
by: intensity(t) = exp(-kappa * t).
– Use Activated: Uses the photofading factor for the data fitting algorithm.
In the input field, you can enter the photofading factor.
– Load Opens a file browser to load the photofading factor from an .xml file.
– Save Opens a file browser to save the current photofading factor as an .xml
file.
– Group 1, 2, 3 Allows you to select more than one region for analysis and to group
them. The mean intensity values for the combined regions are used as
data input.
– Background Activated: Defines the region of interest which represents the mean
background intensity that should be used for data correction. The
mean intensity value of the background region is subtracted from the
data prior to fitting.
– Ref. Activated: Defines the region of interest which represents the fluo-
rescence intensity of a reference area that has not been bleached and
has not been affected by the bleach event. The mean intensity within
this region is used to correct the data at each time point for any
bleaching artifact that occurred during the imaging process. For this
the data is divided by [reference(t)/reference(t=0)].
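The background and reference corrections described above amount to the following per-time-point arithmetic (a sketch under the stated formulas; the function and array names are illustrative, not ZEN's API):

```python
import numpy as np

def correct_frap_trace(roi_mean, background_mean, reference_mean):
    """Correct a mean-intensity trace of an analysis region group:
    1. subtract the mean intensity of the background region,
    2. divide by the normalized reference trace
       reference(t)/reference(t=0) to compensate
       imaging-induced photobleaching."""
    roi = np.asarray(roi_mean, dtype=float) - np.asarray(background_mean, dtype=float)
    ref = np.asarray(reference_mean, dtype=float) - np.asarray(background_mean, dtype=float)
    fading = ref / ref[0]          # photofading factor per time point
    return roi / fading
```

The corrected trace is then the input to the mono- or bi-exponential fit.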
§ The final signal intensity in the analyzed ROIs following recovery IE (of the fitted curve)
§ The amplitude of the fitted curve (which equals the mobile fraction) I1 mobile fraction
§ The proportion of the mobile fraction: F1 mobile fraction (%)
§ The fitted parameter T1(s)
§ The half time of recovery T1 half (s)
§ The rate constant for the exchange of molecules between the bleached region and the sur-
rounding area K1 (1/s)
§ The part of the immobile fraction of the protein I delta immobile fraction
§ The proportion of the immobile fraction: F1 immobile fraction (%)
A double exponential fit displays the mean of the fitted values for the two different mobile fractions
as the fit curve. The following (additional) parameters are provided:
§ The amplitude of the two curves, displayed as one (which corresponds to each part of the
mobile fractions) I1 and I2.
§ The fitted parameters T1 (s) and T2 (s) for each mobile fraction.
§ The rate constant for the exchange of molecules between the bleached region and the sur-
rounding area K1 (1/s) and K2 (1/s) for each mobile fraction.
§ The half time of recovery for each fraction T1 half (s) and T2 half (s).
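The listed quantities relate as in the common mono-exponential FRAP parameterization sketched below. This is a textbook formulation, not necessarily ZEN's exact internal model; the parameter names mirror the table above:

```python
import numpy as np

def mono_exponential(t, i_end, i1, k1):
    """Mono-exponential recovery model:
    I(t) = IE - I1 * exp(-K1 * t)
    with final intensity IE, amplitude I1 (mobile fraction),
    and rate constant K1 (1/s)."""
    return i_end - i1 * np.exp(-k1 * t)

def half_time(k1):
    """T1 half (s): time at which half the recovery amplitude is reached,
    T1 half = ln(2) / K1."""
    return np.log(2.0) / k1
```

For a bi-exponential model, two such terms with (I1, K1) and (I2, K2) are summed, and a half time is reported per fraction.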
The table displays the result of the data fit. The result can be copied to the clipboard (right mouse
click) and pasted directly into Excel, or saved as text.
The calculation of the parameters is based on the bleached ROI(s), unless other ROIs are selected
or the ROIs are moved.
The analysis is then part of the image and the type of analysis is displayed again when opening
the FRAP view for the image.
This module is exclusively available for the Celldiscoverer 7 and allows automated photoactiva-
tion and bleaching at multiple positions. It is not applicable to Tile Regions. Using this module, the
system executes the required experiment steps without user interaction.
Automated Photomanipulation offers you the possibility to save your whole experiment setup in a
settings file.
See also
2 Automated Photomanipulation Tool [} 579]
2 Performing an Automated Photomanipulation Experiment [} 578]
Prerequisite ü You have activated the Automated Photomanipulation module in Tools > Modules Man-
ager > Automated Photomanipulation.
ü You have defined and saved a suitable experiment for photomanipulation at multiple posi-
tions (including Tiles, Bleaching, and Time Series).
ü You have defined experiment positions in the Positions section of the Tiles tool, see Posi-
tions Section [} 363].
Important: Photomanipulation with Tile Regions is not supported!
ü On the Acquisition tab, you have selected if the photomanipulation should be executed at
All Tile Regions per Time Point (e.g. for photoactivation) or as Full Time Series per Tile
Region (e.g. for photobleaching).
ü You have defined photomanipulation settings in the Timed Bleaching tool, see Timed
Bleaching Tool [} 989].
ü You have defined a suitable image analysis setting using the Image Analysis Wizard or an
OAD macro that detects the regions of interest where the photomanipulation should be exe-
cuted, see Creating a New Image Analysis Setting [} 403].
Important: The classes in the analysis and their corresponding channel names must exactly
match the channel names in the experiment!
1. On the Applications tab, open the Automated Photomanipulation tool.
2. Create a setting to save your experiment setup, see Using Automated Photomanipulation
settings [} 577].
3. For Experiment, select the experiment you want to use for photomanipulation. For more
information, see Setting Up a New Experiment [} 51].
Note: You have to acquire a snap and draw one Experiment Region. Bleaching must be
activated. This enables you to define the settings for Timed Bleaching. This Experiment
Region is not used for the photomanipulation experiment.
4. For Analysis, select a suitable setting to analyze the multi-position image.
5. For Class, use the drop-down list to select the appropriate class or channel to identify the
photomanipulation regions of interest.
6. For Sorting Feature, define the acquisition order of the ROIs for photomanipulation. This
step depends on the selected features in the analysis (e.g. ID, mean intensity in channel x,
perimeter, etc.). Per default, the photomanipulation per position is executed in the de-
scending order of the IDs.
7. For Region Type, define the ROI shape and maximum ROI number per position for pho-
tomanipulation.
8. For Output Folder, define the folder where you want to store the experiment data.
The complete experiment including all positions and photomanipulation events is stored as
one .czi file. This folder also contains the first scanned image for ROI identification (Initial-
AnalysisSettingImage.czi) and the table of the ROIs (SingleObjectsTable.csv).
9. Click on Start.
à An InitialAnalysisSettingImage.czi image is acquired to identify the ROIs for photomanipu-
lation and saved to your folder. The ROIs are listed in the SingleObjectsTable.csv table.
They are automatically imported as Experiment Regions for photomanipulation.
à Then the photomanipulation experiment at multiple positions is executed.
à In the Mean ROI tab of the resulting .czi file, the bleach markers are shown.
à To check the ROI selection, you can analyze the InitialAnalysisSettingImage.czi with the
predefined analysis.
à The Start button turns into a Stop button as long as Automated Photomanipulation is
running. Click on Stop to stop the running Automated Photomanipulation workflow.
You have successfully performed an Automated Photomanipulation experiment.
Parameter Description
Options
– Save as Saves the current setting under a new name. Enter a name for the
setting.
Experiment Selects the experiment setup to execute the photomanipulation in-
cluding Tiles, Time Series and Bleaching.
Analysis Selects the image analysis setting used to analyze the multi-position
image to define the Experiment Regions for photomanipulation.
Sorting Feature Defines the acquisition order of the Experiment Regions for photoma-
nipulation depending on the selected features in the analysis (e.g. ID,
mean intensity in channel x, perimeter, etc.).
Region Type Defines the shape of the photomanipulation area: Polygon, Circle,
Rectangle, or Custom Circle/Rectangle (Size and Offset adjustable).
Circle and Rectangle bound/contain the detected photomanipulation
regions.
Max. Number Defines how many photomanipulation ROIs per position should be ex-
ecuted.
Output Folder Selects the folder where the experiment results are saved. A subfolder
will automatically be created for each run of an Automated Photoma-
nipulation experiment, including the pre-bleach image for Experiment
Regions identification, the table of all Experiment Regions, and the ex-
periment results.
See also
2 Automated Photomanipulation [} 577]
6.13 FRET
Förster Resonance Energy Transfer (FRET) is a mechanism which describes an energy transfer
between two chromophores, typically fluorescent proteins. Upon excitation with light of a suitable
wavelength, a donor chromophore in its electronic excited state can transfer energy to an
acceptor chromophore. The efficiency of this energy transfer depends on the distance between
the two molecules. This makes FRET an indicator for even very small changes in the distance
between two molecules.
When using confocal or super resolution techniques, FRET is typically used to determine whether
two fluorophores are within a certain distance of each other and whether this distance changes
due to external or internal influences.
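The distance dependence mentioned above follows the standard Förster relation E = 1 / (1 + (r/R0)^6). This is general FRET theory, not a ZEN-specific formula; the R0 value below is only an illustrative order of magnitude, since R0 is specific to each dye pair:

```python
def fret_efficiency(r_nm, r0_nm=5.0):
    """Förster relation: transfer efficiency vs. donor-acceptor distance.
    r0_nm is the distance at which the efficiency is 50%; ~5 nm is a
    typical magnitude for common dye pairs (assumption, pair-specific)."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)
```

The sixth-power dependence is what makes FRET so sensitive: at twice the Förster distance the efficiency has already dropped below 2%.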
FRET occurs between donor (D) and acceptor (A) dyes when these dyes are within certain proxim-
ity (< ca. 10 nm), resulting in a FRET signal (acceptor fluorescence upon donor excitation). There
are many approaches to quantify FRET signal. In ZEN, the following methods are implemented:
§ Sensitized Emission
§ Acceptor Bleaching
Different acquisition setups are necessary for each approach to enable the FRET view.
An analysis with Sensitized Emission requires a multichannel (multidimensional) data set (mini-
mum 3 channels). The acquisition settings are determined by the used fluorophores. Refer to sci-
entific papers for more information on sample preparation and necessary controls for sensitized
emission experiments.
An analysis with Acceptor Bleaching requires a time series with a bleach event. The acquisition
settings are determined by the used fluorophores. Refer to scientific papers for more information
on sample preparation for acceptor bleaching.
Depending on the data set, the FRET view provides different parameter sets for image analysis.
To access the FRET view, the data set needs to be loaded or completely acquired. Online calcula-
tion of FRET data is not supported.
Acceptor photobleaching analysis is available when a bleaching event is found in the image meta-
data. In this case the FRET efficiency can be calculated the following way:
E = (Dpost – Dpre)/Dpost
D is the donor signal and subscripts refer to pre- and post-bleach images.
In all cases the FRET values are calculated for every pixel that is considered “valid” and then the
mean value is calculated and presented in the table. In case of acceptor photobleaching, the effi-
ciency is also calculated from the pixel-averaged Dpre and Dpost signals.
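Both ways of evaluating the acceptor-bleaching efficiency E = (Dpost - Dpre)/Dpost can be sketched as follows. The valid-pixel criterion below (positive post-bleach signal) is a placeholder for ZEN's actual validity rules (thresholds, saturation exclusion):

```python
import numpy as np

def acceptor_bleaching_efficiency(d_pre, d_post, valid=None):
    """E = (D_post - D_pre) / D_post, evaluated two ways:
    1. per valid pixel, then averaged over the region;
    2. from the pixel-averaged D_pre and D_post signals."""
    d_pre = np.asarray(d_pre, dtype=float)
    d_post = np.asarray(d_post, dtype=float)
    if valid is None:
        valid = d_post > 0                 # placeholder validity mask
    e_pixelwise = ((d_post[valid] - d_pre[valid]) / d_post[valid]).mean()
    e_from_means = (d_post[valid].mean() - d_pre[valid].mean()) / d_post[valid].mean()
    return e_pixelwise, e_from_means
```

For homogeneous regions the two values coincide; they differ when the per-pixel ratios vary across the region.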
Sensitized emission analysis is available when the image has at least three channels (donor, acceptor,
FRET; corresponding measured signals are D, A, and F). The measured intensities are first corrected
for crosstalk, for which the following factors are considered: FD/DD, AD/FD, DA/AA, DA/FA, FA/AA.
The capital letters refer to channels and subscripts refer to the D and A dyes; for instance, FD/DD
is the F/D signal ratio measured for a donor-only sample. In addition, unequal detection efficiencies
of the donor and FRET channels are considered (G factor).
In all cases the FRET values are calculated for every pixel that is considered “valid” and then the
mean value is calculated and presented in the table.
Fc = F - D*(FD/DD) - A*(FA/AA)
Displays the Fc image with intensities converted from the FRET index calculated for each pixel us-
ing the Youvan method. This method assumes that the signal recorded in the FRET channel is the
sum of real FRET signal overlaid by donor crosstalk and acceptor signal induced by direct (donor)
excitation. There is no correction for donor and acceptor concentration levels and as a result the
FRET values tend to be higher for areas with higher intensities.
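The Youvan correction Fc = F - D*(FD/DD) - A*(FA/AA) translates directly to a per-pixel computation. The coefficient values in the usage note are placeholders; in practice they come from the donor-only and acceptor-only calibration measurements:

```python
import numpy as np

def youvan_fc(f, d, a, fd_dd, fa_aa):
    """Fc = F - D*(FD/DD) - A*(FA/AA): removes donor crosstalk and
    directly excited acceptor signal from the FRET channel, per pixel."""
    return (np.asarray(f, dtype=float)
            - np.asarray(d, dtype=float) * fd_dd
            - np.asarray(a, dtype=float) * fa_aa)
```

Because no normalization for donor and acceptor concentration is applied, brighter areas yield larger Fc values, as noted above.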
See also
2 Crosstalk Correction for Gordon and Xia Methods [} 582]
For the Gordon and Xia methods, crosstalk correction is the same.
Acorr = (A – F*AD/FD) / (1 – FA/AA*AD/FD)
Fcorr = (F – D*FD/DD – Acorr*(FA/AA – FD/DD*DA/AA)) / (1 – DA/FA*FD/DD) / G
Dcorr = D + Fcorr*(1 – G*DA/AA) – Acorr*DA/AA
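The three correction equations above can be transcribed term by term (a direct transcription; the coefficient values used in the test are arbitrary placeholders, not calibration data):

```python
def crosstalk_correct(D, A, F, FD_DD, AD_FD, DA_AA, DA_FA, FA_AA, G):
    """Crosstalk correction shared by the Gordon and Xia methods.
    Arguments are the measured channel signals D, A, F and the
    crosstalk factors named as in the text; G is the detection
    efficiency ratio of the donor and FRET channels."""
    A_corr = (A - F * AD_FD) / (1.0 - FA_AA * AD_FD)
    F_corr = ((F - D * FD_DD - A_corr * (FA_AA - FD_DD * DA_AA))
              / (1.0 - DA_FA * FD_DD) / G)
    D_corr = D + F_corr * (1.0 - G * DA_AA) - A_corr * DA_AA
    return D_corr, A_corr, F_corr
```

With all crosstalk factors set to zero and G = 1, the acceptor and FRET signals pass through unchanged and the donor is incremented by the corrected FRET signal, as the Dcorr equation prescribes.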
Prerequisite ü You have acquired or opened an image of a sample with a signal of the donor fluorophore
only using the same imaging settings which are later used for imaging the FRET sample.
ü You are in the FRET view.
1. Go to the FRET view options tab and use the controls to define one or more analysis re-
gions in the raw data image.
à The regions are displayed in a table on the FRET tab.
2. Make sure that Object is activated for each region entry in the table.
à The regions are defined as objects that should be analyzed.
3. On the Parameters tab, click Donor.
The donor coefficient values are determined for the defined regions and displayed in the Parame-
ters tab as well as the data table below the images.
Prerequisite ü You have acquired or opened an image of a sample with a signal of the acceptor fluorophore
only using the same imaging settings which are later used for imaging the FRET sample.
ü You are in the FRET view.
1. Go to the FRET view options tab and use the controls to define one or more analysis re-
gions in the raw data image.
à The regions are displayed in a table on the FRET tab.
2. Make sure that Object is activated for each region entry in the table.
à The regions are defined as objects that should be analyzed.
3. On the Parameters tab, click Acceptor.
The acceptor coefficient values are determined for the defined regions and displayed in the Pa-
rameters tab as well as the data table below the images.
In this view you can examine the results of the FRET analysis. The left image is the analyzed FRET
image, the right is the original raw data image. The data table displays the results of the image
analysis and differs depending on whether you use the Sensitized Emission or Acceptor Bleaching
method to measure FRET efficiency. With the view option tabs you can draw analysis regions into
the raw data image and change analysis parameters.
The analysis with the Sensitized Emission methods (all three methods are analyzed) shows the
following parameters in the data table:
Parameter Description
Region Displays the identification number of regions assigned for analysis.
The region zero refers to the whole image.
D avg. Displays the average intensity of the region in the donor channel.
These values are influenced by the selected settings in the Settings
tab. They can vary from the values of the same regions in the Histo
view.
A avg. Displays the average intensity of the region in the acceptor channel.
These values are influenced by the selected settings in the Settings
tab. They can vary from the values of the same regions in the Histo
view.
F avg. Displays the average intensity of the region in the FRET channel. These
values are influenced by the selected settings in the Settings tab.
They can vary from the values of the same regions in the Histo view.
FRETN (p) Displays the result of the analysis with the Gordon method. The
FRETN value is calculated for each pixel, afterwards the average for
the selected region is determined.
Fc (p) Displays the result of the analysis with the Youvan method. The Fc
value is calculated for each pixel, afterwards the average for the se-
lected region is determined.
N-FRET (p) Displays the result of the analysis with the Xia method. The N-FRET
value is calculated for each pixel, afterwards the average for the se-
lected region is determined.
The analysis with the Acceptor Bleaching method shows the following parameters in the data
table:
Parameter Description
Region Displays the identification number of regions assigned for analysis.
The region zero refers to the whole image.
FRET(p) Eff. % Displays the FRET efficiency, calculated for each pixel and averaged
over all pixels of the region. The averaged intensities of the region
are used to calculate an averaged FRET efficiency for this region:
Delta D / D Post * 100.
D Pre Displays the average donor intensity of the region in the pre-bleach
image.
D Post Displays the average donor intensity of the region in the post-bleach
image.
A Pre Displays the average acceptor intensity of the region in the pre-bleach
image.
A Post Displays the average acceptor intensity of the region in the post-
bleach image.
Delta D Displays the change in donor intensity of the region before and after
the bleach event. Delta D = D Post - D Pre.
Delta A Displays the change in acceptor intensity of the region before and
after the bleach event. Delta A = A Post - A Pre.
Parameter Description
Toolbar Offers tools to draw regions of interest into the image. These regions
are then used for analysis.
– Select Activates the selection mode that enables you to select the graphic
elements in the image area.
– Clone Activates the cloning mode that you can use to create an identical
copy of the last graphic element drawn in by simply clicking anywhere
into the image area. To exit this mode, either switch back to the
selection mode or press the ESC key.
– Draw Rectangle Enables you to draw a rectangular region of interest (ROI) into the
image.
– Draw Spline Contour Enables you to draw a spline contour. You can either define the
corner points by a series of clicks or you can trace a contour by keeping
the left mouse button pressed. Close this contour by right-clicking.
Corners are always rounded with this tool.
Parameter Description
Method Selects the analysis method. The FRET image is updated depending
on the selected method.
For Sensitized Emission, ZEN provides three different methods for
data analysis. The data table displays the results of all three methods.
Using the Acceptor Bleaching method is only possible with an input
of a dual channel time series with a bleach event.
ROI Table Displays the list of regions that are drawn into the raw data image.
See also
2 Analysis Methods for Sensitized Emission [} 581]
2 Analysis Method for Acceptor Bleaching [} 581]
Parameter Description
Donor Ch Defines the channel which contains the Donor signal.
Activated: Applies the current coefficient values to all open image
documents in FRET View.
See also
2 Determining Donor Coefficient Values [} 582]
2 Determining Acceptor Coefficient Values [} 583]
On this tab, you can manually set the threshold for image analysis using the slider or input fields.
Alternatively, the threshold can be set by defining and enabling a background region in the FRET
tab.
The values for the thresholds are either displayed as grey value levels (Raw Data) or Normalized
data, which is normalized to the value 1.
If you have activated Subtract Background Using Thresholds on the Settings tab, the back-
ground values are subtracted before the image analysis is performed.
Parameter Description
Donor Defines the threshold for the analysis of the donor signal.
Acceptor Defines the threshold for the analysis of the acceptor signal.
Clear Sets all threshold values to zero and deactivates the background re-
gion.
The tab allows you to select parameters that are used for image analysis. Pixels where the grey
value is zero are not included in any analysis.
Parameter Description
Subtract Background Using Thresholds Activated: The previously defined grey values of the
thresholds (Thresholds tab) are subtracted from each pixel prior to analysis.
Deactivated: Pixels below the defined thresholds are displayed in the
FRET image and are not part of the analysis (value and area).
Exclude Saturated Pixels Activated: Pixels which are saturated in at least one channel are not
considered for analysis.
Include Threshold Pixels Activated: Pixels with the same grey value as the defined thresholds
are part of the analysis.
Show Palette Activated: Displays a palette of the FRET values within the FRET im-
age and a palette of the channel colors in the raw data image.
Do Not Show Negative Values in FRET Image Activated: Does not display any negative values
in the FRET image.
FRETN Truncation Activated: Includes only the range of the selected values in the result
table and the displayed image.
N-FRET Truncation Activated: Includes only the range of the selected values in the result
table and the displayed image.
Do Not Show Negative Values in Fc Image Activated: Does not display any negative values in
the FRET image showing the analysis according to the method of Youvan.
FRETN Normalization Activated: Does not display any negative values in the FRET image
showing the analysis according to the method of Youvan.
6.14 Deconvolution
This module offers 3D deconvolution algorithms to enhance your 3D image stacks and methods
for theoretical PSF. It uses efficient processing and significant performance gains via GPU accelera-
tion with dedicated CUDA-compatible graphic cards, including support for multi-GPU. It also of-
fers improvements in resolution down to 120 nm (depending on the imaging system) and is com-
patible with conventional widefield, Apotome, Lightsheet, confocal or multiphoton microscopes.
Additionally to four primary methods, more than 15 published methods (e.g., Richardson-Lucy)
can be employed by changing the parameters. Some functionality is generally available, for the
full set of features you need the dedicated license for the module, see Licensing and Functionali-
ties of Deconvolution [} 589].
Some basic functionality for deconvolution is generally available in the software, but the full De-
convolution functionality requires a license.
Basic functionality
The functionality generally available in ZEN includes:
§ The 2D background removal function Deblurring which is based on the nearest neighbor al-
gorithm.
§ The Deconvolution (defaults) function, which offers three primary deconvolution methods
that are automatically adapted to the type of instrument used to acquire the image.
Licensed Functionality
If you have licensed the Deconvolution module and activated it in Tools > Modules Manager,
the additional functionality includes:
§ The additional and more advanced deconvolution method Constrained Iterative in the De-
convolution (defaults) function.
§ GPU acceleration with dedicated CUDA-compatible graphics cards, including support for
multi-GPU.
§ The Deconvolution (adjustable) function, which offers access to all available function pa-
rameters and provides the necessary flexibility for demanding samples and sample conditions.
§ The creation of a PSF with the PSF Wizard, a function offering a wizard to guide you
through a series of steps to create a PSF from a z-stack image of multiple fluorescent beads.
This function is recommended to create experimentally measured PSFs.
Microscopy creates images of objects which should represent the nature of the object as well as
possible. Fluorescent light, which emanates from the object, passes through the various optical el-
ements of the beampath and eventually gets collected by the detector. Unfortunately, on the way
to the detector the signal is changed in such a way, that the quality of the resulting image suffers.
As a consequence, the image is never a 100% correct representation of the object. This effect is
most pronounced in classical widefield fluorescence microscopy, which does not offer any
optical sectioning capability, but it also exists to varying degrees in optical sectioning microscope
systems, e.g. confocal, Lightsheet, or ApoTome.
Fortunately, the dominant function which has this deleterious effect on the image is based on the
optical design principles of the light microscope and is therefore well understood. We call this the
point spread function (PSF) of the microscope system.
Deconvolution is a mathematical method which can reverse the effect of the PSF on the image
and can therefore to a large extent restore the image to better represent the object. In the case of
widefield imaging, Deconvolution can even convey optical sectioning properties to the result im-
age allowing true three-dimensional restoration.
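The principle can be illustrated with a minimal Richardson-Lucy iteration, one of the published methods the module supports. This is a textbook sketch using circular FFT convolution for brevity, not ZEN's implementation:

```python
import numpy as np

def _conv(a, kernel_ft):
    """Circular convolution via the FFT (kernel given in Fourier space)."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * kernel_ft))

def richardson_lucy(image, psf, iterations=20):
    """Iteratively estimate the object that, once blurred by the PSF,
    best explains the observed image (Poisson noise model).
    The PSF is expected centered in the array and normalized to sum 1."""
    image = np.asarray(image, dtype=float)
    psf_ft = np.fft.fft2(np.fft.ifftshift(psf))   # center PSF at origin
    psf_ft_conj = np.conj(psf_ft)                 # correlation = flipped PSF
    estimate = np.full_like(image, image.mean())  # flat initial guess
    for _ in range(iterations):
        blurred = _conv(estimate, psf_ft)
        ratio = image / np.maximum(blurred, 1e-12)
        estimate = estimate * _conv(ratio, psf_ft_conj)
    return estimate
```

A useful property of this update is that the total flux of the estimate matches the total flux of the observed image when the PSF is normalized, while the estimate becomes progressively sharper.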
The following improvements can be obtained by using deconvolution:
§ Denoising
Regularized Inverse
(also known as: Schaefer et al. (2001), Linear Least Squares)
For zero order g-difference: uses the difference of observation and estimate as regularization
term.
Algorithm (default): Regularized Inverse Filter
Advanced settings > Regularization: Zero order

Regularized Inverse
(also known as: Verveer et al. (1997), Linear Least Squares)
For first order regularization, or Good's roughness: uses Good's roughness (first derivative of
the estimate) as regularization term.
Algorithm: Regularized Inverse
Advanced settings > Regularization: First order / Second order
Bibliography
Schaefer, L.H., Schuster, D. & Herz, H. Generalized approach for accelerated maximum likelihood
based image restoration applied to three-dimensional fluorescence microscopy, Journal of
Microscopy, 204 (2001), Pt. 2, 99-107 (PubMed).
Verveer, P.J. & Jovin, T.M. (1997) Efficient superresolution restoration algorithms using maximum
a posteriori estimations with application to fluorescence microscopy. J. Opt. Soc. Am. A, 14,
1696-1706.
Meinel E.S., Origins of linear and nonlinear recursive restoration algorithms, J. Opt. Soc. Am. A.,
1986, 3 (6): 787-799.
Biggs, D.S.C. 1998. Accelerated Iterative Blind Deconvolution. Ph.D. Thesis, University of Auck-
land, New Zealand.
Schaefer, L.H. & Schuster, D. Structured illumination microscopy: improved spatial resolution us-
ing regularized inverse filtering, Proceedings of the FOM 2006, Perth, Australia.
Lucy L.B., An iterative technique for the rectification of observed distributions, Astron. J., 1974,
79: 745-754.
Richardson W.H., Bayesian-based iterative method of image restoration, J. Opt. Soc. Am., 1972,
62 (6): 55-59.
van der Voort, H. T. M. and Strasters, K. C. (1995) Restoration of confocal images for quantitative
image analysis. J. Microsc., 178, 165–181.
Tikhonov, A.N. & Arsenin, V.Y. (1977) Solutions of Ill Posed Problems. Wiley, New York.
Successful deconvolution depends mainly on good image quality, knowledge of the optical parameters of the sample, and detailed knowledge of the type of instrument used for image acquisition. While the instrument type can easily be extracted from the image metadata, the optical parameters of the sample might not be known and the image quality can vary widely. Deconvolution offers many parameters which allow you to correct for image quality and adjust the algorithms to the various optical conditions, such as the coverslip type or the medium in which the sample is embedded. This wide range of parameters can be overwhelming.
With the Deconvolution (defaults) method, good initial results are achieved by using a carefully
preselected set of default parameters. The parameters are automatically adapted to the following
instrument types: widefield, confocal, lightsheet and ApoTome.
While these parameters usually give nice results, there are cases where further parameter changes
are necessary, e.g., activating and using spherical aberration correction. In such cases, you should
use the Deconvolution (adjustable) method, see Performing Configurable Deconvolution
[} 595].
à The image is displayed in the image container and is set as the input image for processing.
7. Click Apply on the top of the Processing tab.
Deconvolution is performed. A new image file is generated and opened automatically in the cen-
ter screen area after processing. If you are satisfied with the result, save the processed image. Re-
peat deconvolution using the other default values to obtain different results. If you have expert
knowledge, you can configure all the deconvolution settings yourself using the Deconvolution
(adjustable) method.
These instructions explain step by step how to deconvolve a z-stack image correctly.
In the described example, the best method, Constrained Iterative, and a theoretical PSF are used.
Preparation
To follow these instructions you will need a z-stack image of your sample. You have opened the
software and no images are loaded.
Prerequisites
§ You have licensed the Deconvolution module and activated it in Tools > Modules Man-
ager.
§ You are on the Processing tab.
§ You have acquired or opened a fluorescence image on which you wish to perform deconvolu-
tion.
§ All tools are in Show All mode.
Steps
§ Step 1: Load input image
This section describes how to load an input image in Deconvolution (Configurable).
§ Step 2: Set parameters
This section describes how to set the parameters.
§ Step 3: Process image
This section describes how to process the image and compare it with the input image.
§ Step 4: Reuse deconvolution parameters
This section describes how to reuse the deconvolution parameters from an already processed
image.
In this step you will select the image to be processed and load it as an input image for deconvolu-
tion.
Info
If a warning appears at this point, it is likely that parameters required for deconvolution are
missing from the image. You can subsequently enter or change these values in the Parame-
ters tool > PSF settings tab > Microscope parameters.
In this step you select the desired algorithm and the associated method parameters.
Prerequisite ü On the Processing tab, you have opened the Parameters tool in the Show All mode. You
can usually leave these parameters alone as they are automatically set to give you a good re-
sult.
1. On the Deconvolution tab, select the desired algorithm. In our example we use the Con-
strained Iterative algorithm, which is the most complex algorithm, but usually the best
one to use.
à Additional parameters for the Constrained Iterative method are displayed on the tab.
Info
The processing of large z-stack images or long time series can take some time. During processing we recommend that you do not perform any other complex actions in ZEN or in other programs on the computer, to avoid increasing the processing time unnecessarily.
In this step you perform deconvolution. You can then compare the resulting image to the input image and view details relating to the processing procedure.
à The progress is displayed in the Convergence History graph, in which the gradual im-
provement is plotted against the number of iterations.
5. The currently active image is loaded automatically into the multi-image. Now drag the input
image from the document gallery in the Right Tool Area into the split document.
6. On the Display tab, change and adjust the display as desired, e.g. Best Fit with 0.01 for the black and white values and a Gamma of 0.8.
à Both images are adjusted simultaneously.
7. If on the Split Display tab Synchronize Dimensions is activated, you can now zoom syn-
chronously into the images (mouse wheel) and, with the mouse wheel held down, move
the image content as desired to focus on the regions of interest.
8. To create an image of the desired view click on New Image from > Current View > Cre-
ate.
à This creates a new output image document with both images shown side by side.
You have successfully performed deconvolution, observed the processing procedure, and created
an output image to compare the deconvolved with the input image.
6.14.5.4 Step 4: Info View and Re-Using Deconvolution Parameters from a Processed Image
The Info View contains a section Deconvolution Information, which shows a summary of the parameters used for deconvolution of the image. It also contains the Convergence History graph recorded while the image was processed.
If you would like to use the same settings to process another image, proceed as follows:
3. Click Clipboard.
à All deconvolution parameters are copied to the clipboard.
4. Open the new image you want to deconvolve and select it as input in the Deconvolution
(adjustable) function. Right-click and select Paste to paste the parameters into Deconvolu-
tion.
5. Click Apply to run Deconvolution with the identical function parameters as used for the
previous image.
You can create settings for Deconvolution which can be saved, exported, and imported.
1. Open the Processing tab and select the method Deconvolution (adjustable).
2. In the Parameters window, activate Show All (if it is not already activated).
3. In the Input window, select the desired image for the Deconvolution.
Note: If you use the settings for Direct Processing, use a test image acquired with the iden-
tical experiment settings you will be using when running the experiment with Direct Pro-
cessing.
4. Click on the context menu button and select New from the drop-down list.
5. Enter a name for your settings and press Enter on your keyboard or click on the save button.
6. Configure your settings in the Deconvolution or PSF settings tab, see also Deconvolu-
tion Tab [} 95] and PSF Settings Tab [} 99].
Info
GPU
The setting also saves the status of the GPU. If you create the setting on a machine without a GPU, export it, and import the setting on a machine with a GPU, the GPU will not be used. Therefore, the processing can be considerably slower. In this case, we recommend creating the setting directly on the machine where the processing is executed.
Exception for Direct Processing:
If you set up your experiment and your setting on an acquisition PC without a GPU, the pro-
cessing PC will ignore the status and use the GPU (if available).
The PSF Wizard combines two steps which are necessary for extracting experimental point spread
functions (PSF) from Z-Stacks of subresolution fluorescent beads:
§ A bead averaging step finds individual beads, presents them for inspection, allows you to se-
lect the ones you like and then creates an averaged combination of all selected beads. This
stack shows a single bead which is, as a consequence of the averaging function, fairly free of
noise.
§ The averaged bead stack is then run through the Create PSF function, which removes background and residual noise, correctly scales the PSF, and converts the stack into a 32-bit floating point format which is better suited to the mathematical procedures used in deconvolution.
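The two steps above can be sketched conceptually as follows. This is a simplified illustration with assumed function names; it is not the actual Create PSF implementation:

```python
import numpy as np

def average_beads(bead_stacks):
    # average the aligned sub-stacks of the selected beads;
    # averaging suppresses uncorrelated noise
    return np.mean(np.stack(bead_stacks), axis=0)

def create_psf(averaged_bead):
    # remove background (estimated here simply as the minimum value),
    # clip residual negative noise, normalize to unit sum, and convert
    # to 32-bit float for the deconvolution math
    psf = averaged_bead.astype(np.float32)
    psf -= psf.min()
    psf[psf < 0] = 0.0
    return psf / psf.sum()
```

The unit-sum normalization ensures that convolving with the PSF conserves the total image intensity, which the deconvolution algorithms rely on.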
Prerequisite ü You have acquired a z-stack image. For more information, see Measuring the PSF Using Sub-
resolution Beads [} 603].
ü You have the license for the Deconvolution module.
ü The use of the PSF wizard is activated.
1. On the Processing tab, select the function PSF Wizard.
à Use Wizard and Bad Pixel Correction are activated by default.
2. On the top of the Processing tab, click Apply.
à The PSF wizard opens and guides you through the creation of the PSF.
à If the Use wizard checkbox is deactivated, the function displays the parameters for Bead Averaging. Note that these parameters are only available in Show all mode. We recommend using the PSF wizard.
à The result of the wizard is a PSF file which you can use for deconvolution of images ac-
quired under the same conditions.
This method determines the position of fluorescent beads in a z-stack image. If these beads are
too close to one another, they are excluded from the calculation. Beads which are far enough
apart from one another are combined into a single bead, from which it is then possible to calcu-
late a PSF using the Create PSF function.
Preparation
1. The surface of coverslips is hydrophobic, which means liquid droplets do not spread out easily and beads tend to aggregate at the edges. For PSF measurements you want individually spread out beads.
2. Bathe the coverslips for 10 minutes in 100% ethanol.
3. Use forceps to remove the coverslip. Shake off excess liquid and pass the coverslip through a Bunsen burner flame.
à This makes the surface slightly hydrophilic, which means the droplets and beads spread out more easily.
à Ideally, use ZEISS coverslips with a defined thickness of 170 μm. However, coverslips and
mounting media for bead measurements must be identical to the ones used for the sam-
ple, the image of which shall be deconvolved. Beads should have a diameter below the
resolution limit of the objective, e.g. 0.175 µm. Smaller diameters are better, but smaller
beads are dimmer and can therefore be difficult to locate on the cover slip.
à Tetraspeck beads from Thermo Fisher Scientific have the advantage of covering four col-
ors which are frequently used in imaging research, but some batches can show rapid loss
of fluorescence. Single color beads are typically brighter.
4. Break up agglomerates by sonicating the stock suspension in a water bath for 20 minutes. Stock suspensions are far too dense, so dilute 1:100 with 70% ethanol.
5. Create further dilutions of 1:1,000 and 1:10,000 by adding 100 µl to 900 µl of 70% ethanol. Mix well using a vortex mixer.
6. Put one 5 µl drop of each dilution on a cover slip using a 20 µl Eppendorf pipette.
7. Let dry. This should take less than 5 minutes. You can speed it up by putting the cover slip
on a warm surface.
8. Put 10-20 µl mounting medium on the coverslip. For aqueous mounting media seal edges
of coverslip with valap (1:1:1 mixture of vaseline, lanolin, paraffin), nailpolish or paraffin.
In the next step, you acquire an image.
Imaging
1. Locate the beads on the microscope (e.g. use a 20x lens first, then move to 63x oil). Start at the 1:100 spot, which should have plenty of beads and be easy to find.
2. When found, move to a sparser spot and try to find an area with a couple of single beads
in the FOV. This can be tricky, but usually you will be able to find good areas if you keep
looking around.
3. Acquire a Z-stack of a suitable area, observing the following rules.
à No saturation. This can be tricky with LSMs. Use 12-bit mode at least.
à Only fill the dynamic range in the histogram to about 80%.
à Make sure to focus up and down when setting up the exposure times to measure the ex-
posure suitable for the bright bead center to avoid the risk of saturation.
4. Set up the z-stack as follows: in the Z-stack tool, click on the Optimal button.
à This sets the distance according to Nyquist. For bead measurements, further reduce the slice distance to about half of what Optimal suggests.
5. Also, define the top and bottom of the stack in such a way that the airy disc of the beads
cannot be distinguished any longer.
6. When ready, save and name the image properly.
Look at the result in OrthoView: Do you see spherical aberrations? Are the beads symmetrical?
Are there enough individual beads in the stack? Is the background low enough?
Prerequisite ü If you are using Direct Processing on different computers, you have connected acquisition and
processing computer, see Connecting Acquisition Computer and Processing Computer
[} 222].
ü To ensure that the processing computer reads incoming files and starts the processing, you
have clicked Start Receiving in the Direct Processing tool on the Applications tab. This is
usually active by default.
ü On the Acquisition tab, Direct Processing is activated. This activates the Auto Save tool as
well.
ü Depending on your settings, you have defined the folder where the acquired images are
stored in the Direct Processing or the Auto Save tool. Use a folder to which the processing
computer has access. For information about sharing a folder, see Sharing a Folder for Direct
Processing [} 238].
ü On the Acquisition tab, you have set up your experiment for image acquisition.
ü If you want to use advanced settings created with the Apotome Plus (adjustable) function,
you have the settings available, see also Creating Deconvolution Settings [} 601].
1. On the Acquisition tab, open the Direct Processing tool.
à If no Direct Processing settings were made before for the current experiment, a particular
processing function is already preselected depending on your microscope, channel set-
tings and licenses.
2. From the Processing Function dropdown list, select Deconvolution.
3. Select a deconvolution method. We recommend Excellent, slow (Constraint Iterative). If
you want to use an advanced settings file you have created with the function Deconvolu-
tion (adjustable), activate Use Advanced Settings.
à A dropdown list is displayed under Load Setting created in the Deconvolution func-
tion.
4. In the drop-down list, select your advanced settings for Deconvolution.
Note: Currently Direct Processing supports settings configured in the Deconvolution tab
of the image processing function Deconvolution (adjustable) and some parameters of
the PSF Settings tab. Especially parameters that rely on external data (like using external
PSF) are not possible with Direct Processing.
5. Set up the experiment. For optimal processing efficiency, select the Full Z-Stack per channel option. This way, the processing can start as soon as a channel z-stack has been completed. Keep in mind that for colocalization studies better results might be achieved when the default All Channels per Slice is used, depending on the specific application and filter configuration.
6. Click Start Experiment to run the experiment. Note: You can pause the processing. If you
stop the experiment, requests that have been sent earlier by the acquisition computer are
not processed. However, already processed images will be retained.
à The images are stored in the folder you have defined in the Auto Save or Direct Pro-
cessing tool. When you abort the acquisition, the remote processing will not take place.
In case you have set up several processing functions, only the acquired image and the fi-
nal output image are stored.
à The processing computer reads incoming files and starts the processing. The path to the
selected folder, the currently processed image as well as the images to be processed are
displayed in the Direct Processing tool. The processed image is saved to the same
folder specified in the Direct Processing tool. If the image name already exists in this
folder, the new file is saved under a new name <oldName>-02.czi.
7. To cancel the processing on the processing computer, on the Applications tab, in the Di-
rect Processing tool, click Cancel Processing.
Once processing is finished, you are notified on the acquisition PC and can open and view the ac-
quired image as well as the processed image. This should be done on the processing computer, so
that you can immediately start a new experiment on the acquisition computer. However, you can
also automatically open the processed image on the acquisition PC with the respective setting in
the Direct Processing tool on the Acquisition tab.
When you open the image, in the Image View, on the Info view tab, information about the exe-
cuted deconvolution is available. When Deconvolution is done through Direct Processing, the info
about Deconvolution parameters shows the suffix online and the Convergence History graph.
Additionally, general information about Direct Processing (e.g. the duration) is also available on
the Info view tab of the processed image.
See also
2 Auto Save Tool [} 890]
2 Sharing a Folder for Direct Processing [} 238]
2 Direct Processing Tool on Acquisition Tab [} 233]
2 Connecting Acquisition Computer and Processing Computer [} 222]
Most types of microscope images could in principle be deconvolved. However, there are practical
limitations, for example the image file sizes might be too large or imaging conditions might be
dominated by effects other than blurring by the point spread function. If, for example, a sample
has strong light scattering properties or if light is strongly absorbed by the sample, deconvolution
becomes difficult or impossible.
Deconvolution works both in 2D and in 3D. The PSF is very small in 2D, so the improvements from deconvolving 2D images are usually not very significant. Deconvolution can show its full power when processing 3D image stacks which have been acquired according to the following general rules:
§ Acquisition of images with enough pixel resolution by choosing objectives with numerical
apertures >0.5 and using camera resolutions with small enough pixel sizes as recommended
by the Nyquist criterion.
§ Acquisition of Z-stacks with distance between individual planes not larger than recommended
by the Nyquist criterion (2-fold oversampling of the theoretically resolvable information, Opti-
mal button in the Z-stack tool).
§ Acquisition of enough planes above and below the structure of interest. As a rule, acquiring
about half the axial PSF size above and below is enough to also get restoration of the struc-
tures at the top and bottom of the structure of interest.
§ Avoiding saturation of the detector.
§ Choosing imaging conditions to avoid sample bleaching.
§ Avoiding spherical aberrations by choosing objectives, which use an immersion medium with
a refractive index as close as possible to the mounting medium of the sample (for example us-
ing water immersion objectives for cell cultures in aqueous medium).
§ Choosing sample media with low background fluorescence (for example phenol red free cul-
ture media).
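The Nyquist-based rules above can be checked with a quick calculation. The formulas below are the common widefield approximations (Abbe lateral resolution and the approximate paraxial axial resolution) and are intended only as a rough sketch, not as ZEN's exact calculation:

```python
def nyquist_steps(wavelength_nm, na, n_immersion):
    """Approximate Nyquist sampling steps (2-fold oversampling), in nm."""
    d_xy = wavelength_nm / (2.0 * na)                # lateral resolution (Abbe)
    d_z = 2.0 * wavelength_nm * n_immersion / na**2  # axial resolution (approx.)
    # sample at half the resolvable distance (2-fold oversampling)
    return d_xy / 2.0, d_z / 2.0

# example: 520 nm emission, 1.4 NA oil objective (n = 1.518)
step_xy, step_z = nyquist_steps(520, 1.4, 1.518)
```

For this example the lateral step comes out near 93 nm and the axial step near 400 nm, which is the kind of value the Optimal button in the Z-Stack tool sets automatically.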
ZEN deconvolution is suitable for images from many different microscope types. The following image types have been tested and are supported by ZEN deconvolution:
The following table lists the parameters which are used by default for widefield, confocal, light-
sheet and ApoTome images.
Apotome Plus is a novel reconstruction algorithm for Apotome 2 and 3. Based on an iterative joint
SIM reconstruction, it increases SNR, lateral and axial resolution compared to predecessor algo-
rithms.
Apotome Plus processes 3D z-stacks and runs on ZEN 3.11 and higher. Note that a separate
workstation running ZEN desk is recommended for processing during acquisition, as the worksta-
tion requirements are significantly higher than for previous methods. Specific guidelines and rec-
ommended configurations can be found in the price list.
This method allows you to select two different algorithms for Apotome Plus, without any further
settings.
Parameter Description
Good, Medium Speed (Joint Fast Iterative): Uses an algorithm based on Deconvolution methods for structured illumination microscopy, with some enhancements as described in the technical note. It is faster and less memory intensive, and relies purely on the image formation model.
Excellent, Slow (Joint Constrained Iterative): Uses an algorithm based on the Generalized approach for accelerated maximum likelihood based image restoration applied to three-dimensional fluorescence microscopy, but modified to allow for joint reconstruction and with enhancements as described in the technical note. It offers increased robustness to noise and mismatch between the theoretical and real PSF.
This method allows you to use and individually configure two different algorithms for Apotome
Plus. Two tabs are available for detailed configuration:
§ On the Apotome Plus tab, you can select the desired algorithm and define the precise set-
tings for it, see Apotome Plus Tab [} 609].
§ On the PSF Settings tab, you can see and change all key parameters for generating a theo-
retically calculated PSF, see PSF Settings Tab [} 613].
Parameter Description
Algorithm Selects the Apotome deconvolution algorithm.
– Fast Iterative (Joint): Uses an algorithm based on Deconvolution methods for structured illumination microscopy, with some enhancements as described in the technical note. It is faster and less memory intensive, and relies purely on the image formation model.
– Constraint Iterative (Joint): Uses an algorithm based on the Generalized approach for accelerated maximum likelihood based image restoration applied to three-dimensional fluorescence microscopy, but modified to allow for joint reconstruction and with enhancements as described in the technical note. It offers increased robustness to noise and mismatch between the theoretical and real PSF.
Also, it offers more options (likelihood Poisson and Gaussian, and regularization), allowing you to choose an algorithm optimized for specific image types, e.g. sparse and dense images.
Enable Channel Selection: Not possible in combination with Maximum Iterations and Quality Threshold.
Activated: Applies the settings on a channel specific basis. This al-
lows you to set parameters for each channel individually. A separate,
colored tab for each of the channels is displayed.
Deactivated: Applies the same settings to all channels of a multi-
channel image.
Parameter Description
2x Upsampling: Activated: Allows you to extract additional information enabled by the SIM principle. This modality splits one pixel into four, which allows the algorithm to work on a finer grid and reconstruct with higher resolution. Note that this will significantly increase the computation times and requires a large CUDA-based GPU.
Normalization Specifies how the data of the resulting image is handled if the gray/
color levels exceed or fall short of the value range.
– Clip: Clips the values that exceed or fall short of the value range. Sets negative values to 0 (black). If the values exceed the maximum possible gray value of 65535 when the calculation is performed, they are limited to 65535 (pixel is 100% white).
Results from different input images can be quantitatively compared
with each other.
– Automatic: Normalizes the output image automatically. In this case the lowest value is 0 and the highest value is the maximum possible gray value in the image (65535). The maximum available gray value range is always utilized fully in the resulting image.
Results from different input images cannot directly be compared
quantitatively with each other.
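The difference between the two modes can be illustrated with a small sketch (our own simplification for a 16-bit result; ZEN's actual handling may differ in detail):

```python
import numpy as np

def normalize_result(result, mode, max_gray=65535):
    if mode == "clip":
        # preserve absolute values: negatives -> 0, overflow -> max_gray
        return np.clip(result, 0, max_gray)
    # "automatic": stretch the result over the full gray value range
    lo, hi = result.min(), result.max()
    return (result - lo) / (hi - lo) * max_gray
```

With Clip, absolute values survive, so results from different inputs remain quantitatively comparable; with Automatic they are rescaled per image and are not.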
Set Strength Manually: Only available for Constrained Iterative and if at least Zero Order is selected for Regularization.
Activated: Sets the desired degree of restoration with the slider. To
achieve strong restoration and best contrast, move the slider towards
Strong. To achieve lower restoration but smoother results, move the
slider towards Weak. If the setting is too strong, image noise may be
intensified and other artifacts, such as "ringing", may appear.
Deactivated: Determines the restoration strength for optimum image
quality automatically. This is recommended for widefield images and
is therefore deactivated by default.
The restoration strength is inversely proportional to the strength of
so-called regularization. This is determined automatically with the
help of Generalized Cross Validation (GCV).
Parameter Description
Corrections
To display parameters for image correction, click .
– Background Activated: Analyzes the background component in the image and re-
moves it before the deconvolution calculation. This can prevent back-
ground noise being intensified during deconvolution.
– Bad Pixel Correction: Activated: Employs a fully automatic detection and removal of spurious or hot pixels (also known as stuck pixels) in an image stack which might interfere with the deconvolution result.
It is based on the analysis of the gray level variance in the neighborhood of each pixel in the image. It is recommended to use this parameter only if stuck pixels are observed in the input image.
– SIM Correc- Activated: Removes stripe artefacts created by image acquisition and
tion corrects for false phases in metadata.
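A neighborhood-based check of the kind described for Bad Pixel Correction can be sketched as follows. This is an illustrative median/MAD-based variant with assumed names, not ZEN's actual detector:

```python
import numpy as np

def find_stuck_pixels(img, k=8.0):
    # compare each pixel against the median of its 8 neighbors; pixels
    # deviating by more than k robust deviations (MAD) are flagged
    padded = np.pad(img.astype(float), 1, mode="reflect")
    h, w = img.shape
    neighbors = np.stack([
        padded[dy:dy + h, dx:dx + w]
        for dy in range(3) for dx in range(3)
        if not (dy == 1 and dx == 1)
    ])
    med = np.median(neighbors, axis=0)
    mad = np.median(np.abs(neighbors - med), axis=0) + 1e-9
    return np.abs(img - med) > k * mad
```

Flagged pixels would then be replaced by their neighborhood median before deconvolution, so that single outliers are not amplified by the restoration.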
Parameter Description
Likelihood Visible for Fast Iterative and Constrained Iterative algorithms.
Selects which likelihood calculation you want to work with.
Parameter Description
– First Order Regularization based on Good's roughness. Under certain circum-
stances, more details are extracted from noisy data. It may be better
suited to the processing of confocal data sets.
First Estimate Visible for Fast Iterative and Constrained Iterative algorithms.
– Input Image The input image is used as the first estimate of the target structure
(default).
– Last Result The result of the last calculation is used to estimate the next calcula-
Image tion. This can speed up a calculation that is repeated using slightly dif-
ferent parameters.
– Mean of Input: No estimate is made; the mean gray level of the input image is used. This is the most rigid application of deconvolution. It should be chosen for confocal images, where the data sampling can be quite sparse. The computation time will increase, but missing information can be recovered from the PSF.
Maximum Iterations: Visible for Fast Iterative and Constrained Iterative algorithms.
Sets the maximum permitted number of iterations. In the case of Richardson-Lucy, you should allow significantly more iterations here.
Quality Threshold Only visible for the Fast Iterative and Constrained Iterative algo-
rithms.
Defines the quality level at which you want the calculation to be
stopped. The percentage describes the difference in enhancement be-
tween the last and next-to-last iteration compared with the greatest
difference since the start of the calculation. 1% is the default value.
Lowering this can bring about small improvements in quality.
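The stopping rule described for Quality Threshold can be written out as a small sketch. This is our own reading of the description, with assumed names:

```python
def below_quality_threshold(improvements, threshold_percent=1.0):
    # improvements[i] = enhancement difference between iteration i and i-1;
    # stop when the latest difference falls below threshold_percent of the
    # largest difference observed since the start of the calculation
    return improvements[-1] < threshold_percent / 100.0 * max(improvements)
```

With the 1% default, a run whose per-iteration improvement has dropped to 1/100 of its largest step is considered converged and stops before Maximum Iterations is reached.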
Parameter Description
Since Apotome Plus only supports GPU, the following two options cannot be edited:
GPU Acceleration Only visible if a suitable (NVIDIA, CUDA based) graphics card is in-
stalled in your PC. The checkbox is then activated by default.
Activated: Uses GPU processing.
Deactivated: Uses CPU processing.
GPU Tiling: Only available for very large images that exceed the available graphics card memory.
Activated: With this function the image is split into smaller portions which fit into the memory of the graphics card. The function automatically determines into how many tiles the image must be split to allow maximum usage of the graphics card. The resulting tiles are automatically stitched together for the final output result.
Deactivated: No tiling is performed; however, in this case only certain sub-functions of deconvolution can run on the graphics card and the speed increase compared to CPU processing will be lower. The image quality might be higher than with tiling because there is no need for stitching.
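The tile-count decision can be approximated as follows. This is a rough model of our own; the real function also accounts for tile overlap needed for seamless stitching, and the working-memory overhead of the algorithm is lumped here into an assumed factor:

```python
import math

def estimate_tile_count(image_bytes, gpu_memory_bytes, overhead_factor=4.0):
    # deconvolution needs several working copies of the data (FFT buffers
    # etc.), modeled here by a fixed overhead factor (assumption)
    needed = image_bytes * overhead_factor
    return max(1, math.ceil(needed / gpu_memory_bytes))
```

Under this model, a 16 GB stack on a 24 GB GPU would be split into 3 tiles, while any image whose working set fits in memory is processed as a single tile.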
All key parameters for generating a theoretically calculated Point Spread Function (PSF) are dis-
played on this tab.
Info
Usually, images (with file type *.czi) that have been acquired with ZEN automatically contain
all microscope parameters, meaning that you do not have to configure any settings on this
tab. Therefore, most parameters are grayed out in the display. It is possible, however, that as a
result of an incorrect microscope configuration values may not be present or may be incorrect.
You can change them here. The correction of spherical aberration can also be set here.
The most important microscope parameters for PSF generation that are not channel-specific are
displayed in this section.
Info
If you enter incorrect values, this can lead to incorrect calculations. If the values here are obvi-
ously wrong or values are missing, check the configuration of your microscope system.
Parameter Description
NA Objective Displays the numerical aperture of the objective.
Immersion Displays the refractive index of the immersion medium. Note that this
can never be smaller than the numerical aperture of the objective.
You can make a selection from typical immersion media in the drop-
down list next to the input field.
Scale Axial Displays the geometric scaling in the Z direction.
Override To change the input fields that are normally grayed out, click on the
button. The input fields and drop-down lists are now active. The text
on the button changes to Reset. To restore the original values saved
in the image, click Reset.
Master Reset Resets the metadata to the values which were originally stored in the
image at time of acquisition. It reverts any changes made by clicking
Override.
Parameter Description
Phase Ring If you have acquired a fluorescence image using a phase contrast ob-
jective, the phase ring present in the objective is entered here. This
setting has significant effects on the theoretical Point Spread Func-
tion (PSF).
– Scalar Theory The wave vectors of the light are interpreted as electrical field = inten-
sity and simply added. This method is fast and is sufficient in most
cases (default setting).
– Vectorial The- The wave vectors are added geometrically. However, the calculation
ory takes considerably longer.
Z-Stack This field can only be changed if it was not possible to define this pa-
rameter during acquisition, e.g. because the microscope type was un-
known. It describes the direction in which the z-stack was acquired.
Note that this setting is only relevant, if you are using the spherical
aberration correction.
Parameter Description
Enable Correction Activated: Uses the correction function. All options are active and
can be edited.
Parameter Description
Embedding Select the used embedding medium.
Medium
Refractive Index Displays the refractive index of the selected embedding medium. En-
ter the appropriate refractive index if you are using a different embed-
ding medium.
Distance to Cover Displays the distance of the acquired structure from the side of the
Slip cover slip facing the embedding medium. Half the height of the z-
stack is assumed as the initial value for the distance from the cover
slip. The value can be corrected if this distance is known. If possible,
this distance should be measured.
Note: Use Ortho View and the Distance Measurement option to define the distance of the sample to the coverslip. It is also important to estimate the position of the glass/embedding medium interface as precisely as possible. If the z-stack extends into the coverslip, the determined range of the stack which reaches into the glass should be entered as a negative value. Example: The z-stack is 26 µm thick and the glass/medium interface is positioned at 9 µm distance from the first plane of the stack. Resulting value for Distance to Cover Slip: -9.0 µm.
Cover Slip Type: Commercially available cover slips are divided into different groups depending on their thickness (0, 1, 1.5, and 2), which you can select from the dropdown list. Cover slips of type 1.5 have an average thickness of 170 µm. In some cases, however, the actual thickness can vary greatly depending on the manufacturer. For best results, the use of cover slips with a guaranteed thickness of 170 µm is recommended. Values that deviate from this can be entered directly in the input field.
Cover Slip Ref. Index: Selects the material that the cover slip is made of. The corresponding refractive index is displayed in the input field next to it.
Working Distance: Displays the working distance of the objective (i.e. the distance between the front lens and the side of the cover slip facing the objective). The working distance is determined automatically from the objective information, provided that the objective was selected correctly in the MTB 2011 Configuration program. You can, however, also enter the value manually.
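The sign convention for the Distance to Cover Slip value can be sketched as follows. This is a minimal illustration of the arithmetic from the example above, with hypothetical function names; it is not part of the ZEN software:

```python
def initial_distance_guess(z_stack_height_um: float) -> float:
    """Default initial value: half the height of the z-stack."""
    return z_stack_height_um / 2.0

def distance_to_cover_slip(interface_offset_um: float) -> float:
    """If the z-stack extends into the cover slip, the depth of the
    glass/medium interface (measured from the first plane of the stack)
    is entered as a negative value."""
    return -interface_offset_um

# Example from the manual: 26 um stack, interface 9 um from the first plane
print(initial_distance_guess(26.0))   # 13.0
print(distance_to_cover_slip(9.0))    # -9.0
```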
This section contains all settings that are channel-specific, i.e. they can be configured differently for each channel.
Parameter Description
Illumination: Displays the excitation wavelength of the channel dye [in nm], using the peak value of the excitation spectrum. The color field corresponds to the wavelength (as far as possible).
Detection: Displays the peak value of the emission wavelength of the channel dye. The color corresponds to the wavelength (as far as possible).
Sampling Lateral: Depends on the geometric pixel scaling in the X/Y direction and displays the extent of the oversampling according to the Nyquist criterion. The value should be close to 2 or greater in order to achieve good results during DCV. As this value on widefield microscopes is generally determined by the objective, the camera adapter used, and the camera itself, it can only be influenced by the use of an Optovar. On confocal systems, the zoom can be set to match this criterion.
Sampling Axial: Depends on the geometric pixel scaling in the Z direction and displays the extent of the oversampling according to the Nyquist criterion. The value should be at least 2 in order to achieve good results during DCV. It is determined by the increment of the focus drive during acquisition of z-stacks and can therefore be changed easily.
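The sampling values can be estimated as the ratio of optical resolution to pixel spacing; a ratio of 2 or more satisfies the Nyquist criterion. The sketch below uses the standard widefield Abbe estimates (lateral d = λ/(2·NA), axial d = 2·λ·n/NA²) with hypothetical example numbers; these formulas are an assumption and not necessarily what ZEN uses internally:

```python
def nyquist_sampling(resolution_um: float, pixel_um: float) -> float:
    """Oversampling factor: optical resolution divided by pixel spacing.
    Values >= 2 satisfy the Nyquist criterion."""
    return resolution_um / pixel_um

# Hypothetical widefield setup: 0.52 um emission, 1.4 NA oil objective (n = 1.518)
wavelength_um, na, n = 0.52, 1.4, 1.518
d_lateral = wavelength_um / (2 * na)        # Abbe lateral resolution, ~0.186 um
d_axial = 2 * wavelength_um * n / na**2     # widefield axial resolution, ~0.805 um

# 6.45 um camera pixel behind a 100x objective -> 0.0645 um in the sample
print(round(nyquist_sampling(d_lateral, 0.0645), 2))  # 2.88: lateral sampling is sufficient
# 240 nm focus-drive increment between z-planes
print(round(nyquist_sampling(d_axial, 0.24), 2))      # 3.36: axial sampling is sufficient
```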
Displays advanced microscope information that influences the form of the PSF in a channel-de-
pendent way:
Parameter Description
Illumination: Selects the illumination method with which the data set was acquired. If a Conventional Microscope has been entered under the microscope parameters, the following options are available: Epifluorescence, Multiphoton Excitation, and Transmitted Light. For confocal microscopes, Epifluorescence is the only option.
Image Formation: Displays whether the imaging was incoherent (Conventional Microscope) or coherent (Laser Scanning Microscope).
Axial FWHM: Displays the FWHM (Full Width at Half Maximum) as a measure of the axial resolution of the PSF.
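As an illustration of the FWHM measure, the following sketch computes the full width at half maximum of a sampled 1-D axial intensity profile (a hypothetical helper, not ZEN code). For a Gaussian profile with standard deviation σ, the analytic value is 2·√(2·ln 2)·σ ≈ 2.355·σ:

```python
import numpy as np

def fwhm(z_um: np.ndarray, profile: np.ndarray) -> float:
    """Full width at half maximum of a sampled 1-D intensity profile,
    interpolating linearly between samples on both flanks."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    lo, hi = above[0], above[-1]

    def crossing(i, j):
        # z position where the profile crosses 'half' between samples i and j
        return z_um[i] + (half - profile[i]) * (z_um[j] - z_um[i]) / (profile[j] - profile[i])

    left = crossing(lo - 1, lo) if lo > 0 else z_um[0]
    right = crossing(hi + 1, hi) if hi < len(profile) - 1 else z_um[-1]
    return right - left

# Gaussian axial profile, sigma = 0.4 um: expected FWHM ~ 2.355 * 0.4 ~ 0.942 um
z = np.linspace(-3, 3, 601)
sigma = 0.4
print(round(fwhm(z, np.exp(-z**2 / (2 * sigma**2))), 3))  # 0.942
```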
This section shows you the PSF that is calculated for a channel based on the current settings. If you select the Auto Update checkbox, all changes made to the PSF parameters are applied immediately to the PSF view. This makes it possible to quickly check whether the settings meet your expectations. You can extract the PSF from the image via the right-click menu (PSF Snapshot); the resulting new PSF document opens in the center screen area.
Apotome Plus can be used in Direct Processing. For general information, see Direct Processing
[} 219].
Prerequisite ü If you are using Direct Processing on different computers, you have connected the acquisition and processing computers, see Connecting Acquisition Computer and Processing Computer [} 222].
ü To ensure that the processing computer reads incoming files and starts the processing, you
have clicked Start Receiving in the Direct Processing tool on the Applications tab. This is
usually active by default.
ü On the Acquisition tab, Direct Processing is activated. This activates the Auto Save tool as
well.
ü Depending on your settings, you have defined the folder where the acquired images are
stored in the Direct Processing or the Auto Save tool. Use a folder to which the processing
computer has access. For information about sharing a folder, see Sharing a Folder for Direct
Processing [} 238].
ü On the Acquisition tab, you have set up your experiment for Apotome acquisition.
ü If you want to use advanced settings created with the Apotome Plus (adjustable) function,
you have the settings available, see also Creating Apotome Plus Settings [} 619].
1. On the Acquisition tab, open the Direct Processing tool.
à The tool parameters are displayed. In the processing pipeline, the first block is selected automatically.
2. If the function is not preselected, go to the Processing Function dropdown list and select
Apotome Plus.
à The parameters are displayed.
3. Select the algorithm you want to use. If you want to use an advanced settings file you have
created with the function Apotome Plus (adjustable), activate Use Advanced Settings.
à A dropdown list is displayed under Load Setting created in the Apotome Plus func-
tion.
4. From the dropdown list, select your advanced settings for Apotome Plus.
Note: Currently, Direct Processing supports settings configured on the Apotome Plus tab of the image processing function Apotome Plus (adjustable) and some parameters of the PSF Settings tab. In particular, parameters that rely on external data (like using an external PSF) are not supported with Direct Processing.
5. Click Start Experiment to run the experiment. Note: You can pause the processing. If you
stop the experiment, requests that have been sent earlier by the acquisition computer are
not processed. However, already processed images will be retained.
à The images are stored in the folder you have defined in the Auto Save or Direct Pro-
cessing tool. When you abort the acquisition, the remote processing will not take place.
In case you have set up several processing functions, only the acquired image and the fi-
nal output image are stored.
à The processing computer reads incoming files and starts the processing. The path to the
selected folder, the currently processed image as well as the images to be processed are
displayed in the Direct Processing tool. The processed image is saved to the same
folder specified in the Direct Processing tool. If the image name already exists in this
folder, the new file is saved under a new name <oldName>-02.czi.
6. To cancel the processing on the processing computer, on the Applications tab, in the Di-
rect Processing tool, click Cancel Processing.
Once processing is finished, you are notified on the acquisition PC and can open and view the ac-
quired image as well as the processed image. This should be done on the processing computer, so
that you can immediately start a new experiment on the acquisition computer. However, you can
also automatically open the processed image on the acquisition PC with the respective setting in
the Direct Processing tool on the Acquisition tab.
Information about Direct Processing (e.g. the duration) is available on the Info view tab of the
processed image.
You can create settings for Apotome Plus which can be saved, exported, and imported.
Prerequisite ü You have opened an image based on which you want to define the processing settings.
1. Open the Processing tab and select the method Apotome Plus (adjustable).
à The parameters are displayed in the Parameters window.
2. In the Input window, select the desired image for Apotome Plus.
Note: If you use the settings for Direct Processing, use a test image acquired with the iden-
tical experiment settings you will be using when running the experiment with Direct Pro-
cessing.
3. In the Parameters window, activate Show All (if it is not already activated) and click the context menu button.
à A dropdown list is displayed.
4. Select New.
à The text field becomes editable.
5. Enter a name for your settings and press Enter on your keyboard, or click the save button.
à You have defined a name for the setting.
6. Configure your settings in the Apotome Plus or PSF Settings tab, see also Apotome Plus Tab [} 609] and PSF Settings Tab [} 613]. Note that if you want to use this setting in Direct Processing, some parameter settings of the PSF Settings tab are not supported there, especially parameters that rely on external data (like using an external PSF).
7. Click the context menu button and select Save.
You have now created and saved an Apotome Plus setting. You can load this setting into the Di-
rect Processing tool to use it for a Direct Processing experiment.
See also
2 Using Apotome Plus in Direct Processing [} 617]
This module enables you to work with images from multiple sources: zoom in from the full
macroscopic view of your sample down to nanoscale details. The Correlative Workspace (CWS) is
the efficient way to analyze and correlate images from multiple sources. You can manage, cor-
rect, and align these images in 2D as well as in 3D. It works with images from SEM, FIB-SEM, X-
ray, light microscopes and any optical images, e.g., from your digital camera. Its sample-centric
workspace lets you build a seamless multimodal, multiscale picture of your sample. Use it to guide
further investigations and target additional acquisitions.
The module employs a novel graphical user interface concept that makes it easy to investigate all
your samples. Design a workflow tailored precisely to the complexity of your experiment, no mat-
ter whether it’s a simple task or a compound experiment. A sophisticated workflow environment
guides you all the way from the setup for automated acquisition to post processing and custom-
ized exports, and right on through to analysis.
For working with ZEN Connect projects or images, you might need a separate license. The basic
ZEN Connect functionality is available for all versions. This functionality includes:
§ ZEN Connect correlative workspace, including the display of images with their relations.
§ Manual alignment of captured images.
§ Auto-registration of images using stage coordinates.
§ Image acquisition into the project.
§ Import of images into the correlative workspace.
§ Interactive control of stage movement from the correlative workspace.
Licensed Functionality
If you have the necessary license, additional functionality for 2D and 3D work is available.
Additional 2D functionality:
1 Image View
Area where you interact with images. Here, for example, you can select images and align
them.
ZEN Connect provides its own 3D viewer. For more information on the user interface of
this 3D viewer, see ZEN Connect 3D View [} 621]. You can use the view tabs on the left
side of the Image View to switch between the two viewers.
4 Regions Tool
Displays a list of Regions of Interest which are drawn into a ZEN Connect project. For
more information, see Regions Tool [} 663].
5 View Options
Area for general view options of the Dimensions Tab [} 1029] and specific view options
on the Measurement Tab [} 658] and Alignment Tab [} 656]. The Alignment tab is only
visible when you have entered the mode to align images (see also Activating the Align-
ment Process [} 635]).
The 3D view for ZEN Connect allows you to see and align two 3D volumes from your project. The
viewer and functionality are based on the Tomo3D view, see Tomo3D View [} 545]. For this functionality, you also need the license for the 3D Toolkit.
1 Image Views
Area where you interact with the image views, set the cut lines with the mouse and align
the adjustable image in the 2D views. You can display up to four different views, includ-
ing a 3D and different 2D views. The number of image views can be set in the Ortho
Display tab, the type can be set by the dropdown in the top left corner of each view. For
further information on the functionality, see Tomo3D View [} 545].
3 View Options
Area for general and the Tomo3D specific view options as well as the dedicated Align-
ment and Connect 3D tabs, see Alignment Tab ZEN Connect 3D [} 622] and Connect
3D Tab [} 623].
See also
2 Opening Images in the ZEN Connect 3D View [} 623]
2 Aligning Images in the ZEN Connect 3D View [} 643]
Parameter Description
Mirror: Mirrors the adjustable volume horizontally or vertically.
Pre-Align: If the two volumes are some distance apart, Pre-Align helps to bring them into close proximity by centering the adjustable volume on the fixed reference volume.
Recalculate: Recalculates the bounding box around the two volumes. A bounding box is displayed surrounding both images. After an alignment of the images, the bounding box can be too large (enclosing empty space) or too small (no longer surrounding both images) and can be recalculated.
Reset: Resets the alignment to the state it was in when you started this alignment operation.
Parameter Description
Image Table: Displays information for the volumes currently opened in the 3D view.
– Name: Displays the name of the volume. A checkmark in front of the Name column indicates that this volume is currently selected for alignment.
1. Start the software. For more information, see Starting Software [} 22].
The software opens and ZEN Connect is available.
Note that before working with ZEN Connect, you need to create a ZEN Connect project. For
more information, see Creating a ZEN Connect Project [} 625].
In the ZEN Connect 3D view you can display two z-stacks as 3D volumes and interact with them.
Note that no more than two volumes can be displayed in this 3D view.
Prerequisite ü You have opened a ZEN Connect project with at least two z-stacks.
1. In the ZEN Connect tool or the Image View, select one or a maximum of two z-stacks
while pressing Ctrl.
2. Right-click one of the selected images in the ZEN Connect tool and select Show in 3D. Alternatively, click the context menu button and select Show in 3D.
3. If you want to open the volumes directly in 3D alignment mode, go to the ZEN Connect
tool and select Align Two images in 3D for the Alignment button. Note: For the align-
ment mode two volumes need to be selected. The image you selected second is then auto-
matically preset for alignment.
The ZEN Connect 3D view opens and displays the selected z-stacks as 3D volumes. The images
displayed in the 3D view are marked with an icon in the tree view of the ZEN Connect tool.
See also
2 Aligning Images in the ZEN Connect 3D View [} 643]
In ZEN Connect, it is also possible to import non-image data into your project, have a visual representation (marker) of it in your image area, and align the position of the data marker with respect to the images in the project. The data is listed under Non-Image Data in the tree of the ZEN Connect tool and represented by the marker in the image area. By default, the marker for this non-image data is hidden. To toggle the data visible and invisible, see also Moving or Hiding Images [} 630].
See also
2 Importing Non Image Data [} 633]
2 Aligning Non Image Data [} 644]
If you have licensed ZEN Connect, the software supports the display of images with the specific
Horiba LabSpec HDF5 Raman image format. These supported images basically represent a map
(score map) of intensities of the different channels. This means the higher the concentration of a
detected material, the higher the displayed color saturation. To display the data the image codec
HDF5 is used.
Info
HDF5 is only used as a standard image codec to display this specific type of image (Horiba LabSpec HDF5 Raman image format); this is not an HDF5 import!
Within a ZEN Connect project, in the ZEN Connect tree view, you manage your data in a project structure tree combined with the viewer. Before acquiring or importing any images, you need to create the ZEN Connect project. All ZEN Connect functionality is available only within a ZEN Connect project.
You can open only one ZEN Connect project at a time.
à In the ZEN Connect tool, the empty ZEN Connect project is displayed. Here, the structure of the ZEN Connect project will be displayed as soon as you acquire or import images. In the folder on your computer, the ZEN Connect project file <CWS project name>.a5proj is generated. A <ZEN Connect project name>.a5lock file is generated to prevent more than one user from working on the project at the same time. It is generated any time you load a ZEN Connect project.
à At the bottom of the Image View, a scale bar with size, the width of the field of view
(FOV), and scaling is displayed.
à You have created a ZEN Connect project.
5. Acquire an image.
à In the Project View, a new session node is created, and each acquisition is displayed.
à In the Image View, all images are displayed. They are marked with a colored frame (blue: normal image, red: selected image).
6. When you close the project or the software, you are prompted to save the project file.
For information on setting up holders and carriers, see Selecting and Clearing Carrier/Holder
[} 647].
For information on the ZEN Connect tool, see ZEN Connect Tool [} 650].
You can load any of your ZEN Connect projects to continue with your work. You can also load
existing ZEISS Atlas 5 projects.
Info
Open the ZEN Connect project before performing an S&F calibration.
Prerequisite ü You have created a ZEN Connect project, or a ZEISS Atlas 5 project is available. ZEISS Atlas 5
projects belong to the ZEISS ATLAS 5 software. ZEN Connect supports these formats.
1. Select File > Open, navigate to the ZEN Connect project and open it.
In the ZEN Connect Project View, the current state of the Connect project is displayed. In the
Image View, the sample holders are marked, and previously acquired images are displayed. The
current stage position is marked with a cross hair. If you want to acquire additional images to the
project, align the new session with the existing data.
See also
2 Opening or Deleting a ZEN Connect Project from ZEN Data Storage [} 273]
You can import simple images, such as camera images, or more complex images, such as a light microscope image with overlays, into your ZEN Connect project.
You can use an imported image as a backdrop to navigate the region. You can correlate imported
images with sample holder marks, e.g., fiducials or other images through the alignment process.
The imported image is displayed according to its position in the Layers View along with any
other image in the project.
The metadata of .CZI-images are read natively and are also imported.
See also
2 Adding an Image to the ZEN Connect Project [} 626]
Zooming to 100%
You can remove images from your ZEN Connect project. Images that you acquired and added to the project can also be deleted permanently from the disk.
See also
2 ZEN Connect Tool [} 650]
Info
ZEN Data Storage
If you have opened a project from ZEN Data Storage, you can also open an individual image this way. The image is then downloaded, and changes are not updated in the viewer and the project until you upload the image again. For information on uploading an image to the data storage, see Saving an Image to ZEN Data Storage [} 270].
1. In the Project View or in the Layers View, right-click an image and select Show in Ex-
plorer. Alternatively, click on the Context menu button and select Show in Ex-
plorer.
The explorer opens and the folder with the selected image is displayed.
In the Layers View, you can move images over and under other images, or hide them com-
pletely. In the Project View, you can hide images.
Prerequisite ü You have loaded a ZEN Connect project with at least two images.
1. To change the image order, in the Layers View, move the image by dragging it up or
down.
à In the Image View, the changed order is immediately visible.
2. To hide the image from the ZEN Connect project, in the Layers View, activate or deacti-
vate the image by clicking the Eye icon on the right of the image name. Alterna-
tively, in the Project View, right-click Show/Hide or click on the Context menu button
and select Show/Hide.
The results of your changes are displayed immediately in the Image View.
See also
2 ZEN Connect Tool [} 650]
To organize your work, you can start a new session within your ZEN Connect project any time.
1. In the ZEN Connect tool, click on the Context menu button and select New ses-
sion.
Alternatively, click on the New Session button.
A new session is activated. As soon as you acquire a new image, a new session node is created,
and the new image will be subordinated.
See also
2 ZEN Connect Tool [} 650]
In ZEN Connect you can switch between two different view modes for your projects. The default is the carrier/holder view mode, where the coordinate system of the correlative workspace is aligned with the screen and images of the current system/session might be rotated. The second is the stage-centric view mode, where the coordinate system of the current session is aligned with the screen and the carrier/sample holder as well as other sessions might be rotated.
You can import simple images, such as camera images, or more complex images, such as a light microscope image with overlays, into your ZEN Connect project. For more information, see Adding an Image to the ZEN Connect Project [} 626].
Alternatively, you have the option to import third-party images via BioFormats into your ZEN Connect project. For more information, see Importing Third-party Images [} 631].
Note: If you import an Airyscan image, ZEN Connect displays only the raw data and not the cal-
culated Airyscan. Such images should be processed before you add them to ZEN Connect. If you
want to add an unprocessed Airyscan image, a warning will appear asking if you want to con-
tinue.
See also
2 ZEN Connect Tool [} 650]
ZEN uses BioFormats as an integrated library for reading and writing life sciences image file for-
mats. It is capable of parsing both pixels and metadata for a large number of formats. It achieves
this by converting proprietary microscopy data into an open standard called the OME data model.
With BioFormats, you can read proprietary formats and convert them into an intermediate format, e.g., CZI or OME-TIFF. For example, you can import simple images, such as camera images, or more complex images, such as a light microscope image with overlays, into your ZEN Connect project.
With ZEN Connect you can import SmartFIB stacks of crossbeam microscopes and align them with data from light microscopes. The orientation of these stacks differs slightly from standard z-stack acquisition, as the acquired images are tilted by a certain angle compared to a usual z-stack. The import calculates this tilt from the metadata of the image. If the import finds no metadata concerning the tilt angle and the user does not enter a value, it uses a default of 90 degrees. Alternatively, you can enter the angle of your sample during import, if you know it, and the import then calculates the tilt angle based on this sample angle.
During import, the XY offset metadata of the individual slices is ignored by default and only the offset of the first TIFF file is considered. This default avoids the creation of a slanted z-stack in case some slices contain incorrect metadata. Activate the consider individual slice offsets checkbox in the import dialog to change this behavior.
9. Click on Open.
The FIB stack is now imported into the selected session.
Note: When importing larger image files it may take a while until the entire stack is visible in the
viewer. This also applies when you open a project that contains such larger stacks.
Note: To improve the alignment of the z-stack, the TrakEM2 format is supported. Before importing the SmartFIB z-stack, use a specific Fiji script (for information, see https://imagej.net/Register_Virtual_Stack_Slices#API_documentation) that creates XML files in the TrakEM2 format within the transferred input folder. These XML files are then considered during the import and replace the stage-position information with the computed pixel delta x/y shifts. Only the stage position of the first image is used for absolute positioning; all other images of the stack are positioned relative to this first slice using the computed shifts.
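The positioning rule in the note can be sketched as follows. This is an illustration of the arithmetic with hypothetical names, not the actual import code, and it assumes each shift is expressed relative to the previous slice:

```python
def slice_positions(first_stage_xy_um, pixel_shifts, pixel_size_um):
    """Place all slices of a stack from the stage position of the first slice.
    Only the first slice is positioned absolutely; every later slice is
    placed by accumulating the computed pixel delta x/y shifts."""
    x, y = first_stage_xy_um
    positions = [(x, y)]
    for dx_px, dy_px in pixel_shifts:
        x += dx_px * pixel_size_um
        y += dy_px * pixel_size_um
        positions.append((x, y))
    return positions

# Three slices, 10 nm pixels, each slice shifted by (2, -1) px from the previous
positions = slice_positions((100.0, 50.0), [(2, -1), (2, -1)], 0.01)
print(positions)
```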
See also
2 Non Image Data [} 624]
You can export data of ZEN Connect projects as a single image for distribution to collaborators, or for use in publications. The content can be a single image, tiles, a collection of images, or a view of the entire ZEN Connect project. You can drag or resize the region to control the area that you want to export, and choose whether image names and frames are shown on the exported image. You can pan and zoom with the mouse in the Image View to get fine control of the export area.
à A wizard opens.
2. Make your settings and click Export Data.
3. Navigate to the folder where you want to store the exported image. The default file name
is the ZEN Connect project name. Click Save.
You have exported one image in a standard image format. The exported image is based on the
export area you set up in the Image View.
See also
2 ZEN Connect Single Image Export Wizard [} 659]
2 ZEN Connect Tool [} 650]
See also
2 ZEN Connect Tool [} 650]
2 Video Export Wizard [} 661]
ZEN offers the functionality to export image data as an MRC file, which makes it compatible with the software application SerialEM and available for TEM users. This export is available for z-stacks and multi-channel images with all the common pixel types (8/16 bit, 32 bit float, RGB 24/32/48). Tiles, time series, multi-scene images, and images with unprocessed data or special dimensions are not supported. For images with more dimensions, you can use the image processing function Create Image Subset [} 108] to extract a single image or stack and then export it with this function.
The export creates an MRC file as well as a NAV file, which contains regions of interest (e.g. points, rectangles, polygons, ...). The NAV file can be loaded in the SerialEM software, which then loads the MRC file, so that the image is shown with the respective regions.
6.16.10 Alignment
In a ZEN Connect project, you can manually align images in your workspace to correct their posi-
tion or size with respect to the samples. To do so, you activate the alignment process and start
aligning image data. Within a ZEN Connect project, you can also calibrate your system using a
sample holder with fiducial markers by moving between the markers and confirming their posi-
tions.
The alignment process lets you align your current session with fiducial marks or previous images.
You can align image data manually.
You should create a new session any time the alignment of the sample in the microscope has
been disturbed.
à As long as the alignment process is not activated, this is indicated with a little lock next
to the cursor.
2. Right-click the selected image and select Align Data.
Alternatively, right-click the image(s) in the ZEN Connect tool and select Align Data. You
can also select Align for the Alignment button and click it.
You have activated the alignment process for one or more images. The Alignment Tab [} 656]
below the Image View is displayed.
You can start aligning image data. If you start an alignment on a session node, the set alignment
is used for all current and future images of the session. You can use this if you change your sam-
ple between different systems and want to align their coordinate systems to each other.
See also
2 ZEN Connect Tool [} 650]
In the alignment process, you have various options to align image data. Note that you can change
the alignment mode during the alignment process. The alignment edits you have made are pre-
served, but you have to restart the pinning process if you have inserted any pins before changing
the mode.
Note: The alignment process can be executed multiple times. Each time you run the alignment
process, the end result of the last alignment is used as the starting point for the new alignment. If
the initial image was far out of alignment at the start, it is easiest to do the alignment process
once roughly, and then do the alignment process a second time with more precision. The second
alignment will use the first alignment as a starting point, and will allow you to establish a more
precise alignment quickly.
Prerequisite ü You have loaded a ZEN Connect project and activated the alignment process.
1. In the Alignment tab, select one of the following alignment modes and the region you
want to align.
Translate Only
1. Click and drag with the mouse to translate the image you are aligning with respect to ev-
erything else.
à You can zoom in and out with the mouse wheel, or press and hold the CTRL key to pan
while you are in the process of aligning the image.
à After you insert the first pin, your input will rotate the item around the first pin, when
dragging it with the mouse.
à After you insert the second pin, your input will also stretch and shear the item.
Image data from microscopes should not need to be stretched or sheared to perform alignment. If you need to make large corrections after inserting the second pin, this might be an indication of other problems, such as equipment calibration issues.
Alignment Handles
1. If you select Alignment Handles, you can use handles to rotate, translate, and scale the
image.
Reset alignment
1. Click on the Reset button to reset the alignment you performed.
à The alignment is reverted as it was when you started aligning. The alignment mode is still
activated.
Cancel alignment
1. Click on the Cancel button to reset the alignment you performed.
à The current alignment is cancelled and reverted to the alignment in place before you
started the alignment mode. The alignment mode is not activated any longer.
Finish alignment
1. Click on the Finish button to finish the alignment mode and to save the alignment informa-
tion.
Clear alignment
1. Click on the Clear button.
à The session is restored to its un-aligned state.
See also
2 Aligning Images in Z Direction [} 641]
2 Rotating a Z-Stack [} 643]
2 Alignment Tab [} 656]
In ZEN Connect you can align your images/sessions not only in the x and y directions, but also in z.
Prerequisite ü You have opened a ZEN Connect project containing images/ z-stacks with z information.
1. Select the image or session you want to shift in z direction.
2. In the ZEN Connect tool, select Align for the Alignment button and click it. Alternatively,
right click on the image and select Align Data.
à The Alignment Tab [} 656] is displayed below the Image View.
3. For Alignment Mode select 3D Alignment in the dropdown list.
4. For Relative Z Offset set the value for your shift in z direction.
5. Click on Finish.
You have now aligned your data in z direction. For an illustration of the alignment see Example
for Z Alignment [} 642].
Note that for the z alignment the view of the aligned stack remains the same, whereas the view
of the other stacks changes.
Prerequisite ü You have opened a ZEN Connect project containing images/ z-stacks with z information.
1. Activate the Global-Z slider and move to the z-position where you want your image to be
placed.
2. Select the image you want to shift in z direction.
3. In the ZEN Connect tool, select Align for the Alignment button and click it. Alternatively,
right click on the image and select Align Data.
à The Alignment Tab [} 656] is displayed below the Image View.
4. For Alignment Mode select 3D Alignment in the dropdown list.
5. On the Alignment tab, click on the Set to current Global-Z button.
6. Click on Finish.
You have now set the center of your z-stack to the currently selected Global-Z.
This chapter serves as an illustration of how and what is happening for z alignment in ZEN Con-
nect. Consider the following situation:
This image illustrates two z-stacks with five planes and different z coordinates (with µm as unit).
The green line simulates the position of the Global-Z slider. Here, the Correlative Workspace
would show you plane 1 of the first stack and an empty frame where the second stack would be
because the Global-Z is beyond the range of the second stack.
All z-planes of Stack 2 are shifted by 3 µm, and now z-plane 1 of Stack 1 and z-plane 3 of Stack 2 would be visible in the Correlative Workspace.
Now the first stack is out of range of the Global-Z slider, so the Correlative Workspace would
show you plane 1 of Stack 2 and an empty frame where Stack 1 is located.
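The example above can be sketched as a small visibility check: a stack shows the plane whose z-coordinate matches the Global-Z position, or an empty frame when Global-Z is outside its range. The names and the matching tolerance are hypothetical, not ZEN internals:

```python
def visible_plane(global_z_um, plane_zs_um, tol_um=0.5):
    """Index of the plane shown at the given Global-Z, or None (empty frame)
    if Global-Z lies outside the stack's z range."""
    for i, z in enumerate(plane_zs_um):
        if abs(z - global_z_um) <= tol_um:
            return i
    return None

stack1 = [0, 1, 2, 3, 4]                    # five planes, 1 um apart
stack2 = [-5, -4, -3, -2, -1]               # before the shift
stack2_shifted = [z + 3 for z in stack2]    # all planes shifted by 3 um

print(visible_plane(0, stack1))          # 0 -> plane 1 of Stack 1 is shown
print(visible_plane(0, stack2))          # None -> empty frame for Stack 2
print(visible_plane(0, stack2_shifted))  # 2 -> plane 3 of Stack 2 is shown
```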
In the alignment mode you can also perform a three dimensional rotation of a z-stack.
With the ZEN Connect 3D view you can display and align two z-stacks as 3D volumes. This align-
ment mode allows a translation of the volumes in x, y and z.
Prerequisite ü You have opened two z-stack images in the ZEN Connect 3D view or directly in 3D alignment
mode, see Opening Images in the ZEN Connect 3D View [} 623].
1. In the Connect 3D tab, click the button for the volume you want to align, e.g. Align Volume 1.
If you opened the 3D view directly in alignment mode, one volume is already selected for alignment.
à You are in alignment mode and the Alignment tab is displayed.
2. To mirror the selected image, go to the Alignment tab and click to mirror the image
horizontally, or click to mirror it vertically.
3. In one of the 2D image views, shift the volume via drag and drop and align it with the sec-
ond volume as reference.
à The changes are automatically updated in all views, including the 3D view.
4. If you are satisfied with your alignment, go to the Alignment tab and click Apply.
Prerequisite ü You have opened a ZEN Connect project with non-image data.
ü Your non-image data is toggled visible, see also Moving or Hiding Images [} 630].
1. In the ZEN Connect tool or in the image view, right-click the non-image data and select
Align Point Position. Alternatively, in the ZEN Connect tool, select the non-image data
and select Align for the Alignment button and click it.
à You enter the alignment mode.
2. In the image area, click at the position where you want to place the non-image data.
3. Click on Finish.
You have aligned your non-image data.
See also
2 Non Image Data [} 624]
Info
Alignment Handles
You can also use the alignment handles of the selected image(s) to translate, scale, and rotate the image.
Info
View Options
You can use the available options of the Dimensions tab [} 1029] to adjust the image view
(e.g. use the Global-Z slider). The image which is adjusted by the Dimensions tab can be se-
lected in the tree view of the Select Node For Dimensions tab.
3. In the list on the left, use the button to add as many points as necessary for your
point alignment.
With one point, only a translation is possible; with two points, translation and rotation; three or more points enable all transformations.
4. In the Algorithm dropdown list, select the alignment operations you want to perform.
5. Click on Draw for the first point.
à You enter the drawing mode for the first point.
6. In the Image Window on the left, click to set the point in your image(s) (Subject point).
7. In the Project Window on the right, click to set the corresponding location for this point
in the project (Reference point).
à Both color markers in the table are green and the first point for the alignment is set successfully.
8. Repeat these three steps for every point you add/need.
9. If you want to redraw a point pair, click on Redraw and click to set the new positions in
both windows.
10. Click on Next.
à The second step of the wizard opens. It displays a preview of the final alignment result
and values for the parameter changes.
11. If you want to change the alignment, click on Back to get back to the previous step. Other-
wise, click on Finish to save the alignment and close the wizard.
You have aligned the selected image(s) in the ZEN Connect project.
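The relationship between the number of point pairs and the recoverable transformation can be illustrated with a small sketch. This is not ZEN code; the function name and the point coordinates are invented for the example. One pair determines a pure translation; a second pair additionally determines a rotation angle:

```python
import math

def estimate_transform(subject_pts, reference_pts):
    """Estimate the motion mapping subject points onto reference points:
    one pair yields a translation only, two pairs add a rotation.
    Returns (angle_radians, (tx, ty)), interpreted as a rotation about
    the first subject point followed by a translation."""
    (sx, sy), (rx, ry) = subject_pts[0], reference_pts[0]
    if len(subject_pts) == 1:
        return 0.0, (rx - sx, ry - sy)
    # Angle between the vector to the second subject point and the
    # vector to the second reference point.
    (sx2, sy2), (rx2, ry2) = subject_pts[1], reference_pts[1]
    a_subj = math.atan2(sy2 - sy, sx2 - sx)
    a_ref = math.atan2(ry2 - ry, rx2 - rx)
    return a_ref - a_subj, (rx - sx, ry - sy)

# One point pair: pure translation.
angle, shift = estimate_transform([(1.0, 1.0)], [(4.0, 5.0)])
print(angle, shift)  # 0.0 (3.0, 4.0)

# Two point pairs: translation plus a 90° rotation.
angle, _ = estimate_transform([(0.0, 0.0), (1.0, 0.0)],
                              [(2.0, 2.0), (2.0, 3.0)])
print(math.degrees(angle))  # 90.0
```

With three or more pairs, the additional constraints also determine scaling and shear, which is why the wizard only enables the full set of algorithms once enough points are drawn.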
4. In the list on the left, click to add as many points as necessary for your point align-
ment.
With one point, only a translation is possible; with two points, translation and rotation; four or more points enable all transformations.
5. In the Algorithm dropdown list, select the alignment operations you want to perform.
6. Click Draw for the first point.
à You enter the drawing mode for the first point.
7. In the Image Window on the left, click to set the point in your image(s) (Subject point).
8. In the Project Window on the right, click to set the corresponding location for this point
in the project (Reference point).
à Both color markers in the table are green and the first point for the alignment is set successfully.
9. Repeat these steps for every point you add/need.
10. If necessary, use the controls in the Dimensions tabs, e.g. to switch to a different z-slice to
set the points on specific z-levels.
11. If necessary, change the left image view via the dropdown to add or check points in other
2D image dimensions.
à The view changes to the selected 2D image dimensions.
12. If you want to redraw a point pair, click Redraw and click to set the new positions in both
windows.
13. Click Next.
à The second step of the wizard opens. It displays a preview of the final alignment result
and values for the parameter changes.
14. If you want to change the alignment, click Back to get back to the previous step. Other-
wise, click Finish to save the alignment and close the wizard.
You have aligned the selected image(s) in the ZEN Connect project.
1. In the button bar below the Image View, click the Zoom to Extent icon.
Alternatively, click on the Context menu button and select Zoom to extent.
In the Image View, the sample holder is centered. All images in the ZEN Connect project are
displayed.
See also
2 ZEN Connect Tool [} 650]
1. In the button bar below the Image View, click the Pan & Zoom icon.
à With your mouse you can pan and zoom in and out in the Image View.
See also
2 Button Bar below Image View [} 659]
You select a region to later apply the Alignment Mode to the image contained in this region.
In the ZEN Connect tool, both in the Project view and in the Layers view, the image within the
selected region is highlighted.
For a better overview, you can toggle the display of the image name and of the frame of images
in the Image view of your ZEN Connect project.
The sample is usually mounted on a carrier or directly on a sample holder. Select the appropriate
sample holder for your configuration when you configure your project.
We offer specific sample holders and carriers with certain markers, e.g., "L"-markers or others.
These CorrMic sample holders are necessary for a Shuttle & Find workflow. Note: If you change
the holder/carrier after a Shuttle & Find calibration, the calibration needs to be redone.
à The frame of the selected template is displayed in the Image View of your ZEN Con-
nect project.
3. To deselect the carrier/holder, click the Select Carrier/Holder dropdown list and select
Clear Carrier/Holder.
For information on correlative sample holders, see Correlative Sample Holders [} 695].
See also
2 Select Template Dialog [} 656]
You can create an image from the loaded ZEN Connect project.
ZEN Connect offers the functionality to draw and delete regions of interest (ROI) in a project. The
regions are shown in the project and listed in the Regions Tool [} 663] and they are saved and
loaded together with the project.
1. In the button bar below the Image View, click on the button.
2. Draw a rectangular region into your project.
The Region of Interest is displayed in your project.
Prerequisite ü You have loaded a ZEN Connect project with regions of interest.
1. In the Regions tool, select your region in the list and click on the button.
Alternatively, right-click on your region of interest and select Delete.
The selected region is deleted.
Prerequisite ü You have loaded a ZEN Connect project with regions of interest.
1. In the Regions tool, select your region in the list and click on the button.
2. Enter a name for the selected region and press Enter.
The region is renamed according to your input.
Prerequisite ü You have loaded a ZEN Connect project with regions of interest.
1. In the Image View or the list of the Regions tool, right-click on the region of interest and
select Zoom To.
The view zooms to the selected region.
Prerequisite ü You have loaded a ZEN Connect project with regions of interest.
1. In the Regions tool, double click on an entry of a region. Alternatively, select an entry and
click , or right click the entry and select Move to.
The stage moves to the center of the respective region.
See also
2 Editing Measurements in a ZEN Connect Project [} 650]
Prerequisite ü You have opened a ZEN Connect project and added a measurement, see Adding Measure-
ments to a ZEN Connect Project [} 649].
1. In the button bar below the Image View, click .
à You are now in edit mode.
2. To resize measurements, click on the end points of a measurement to drag and drop them.
In case of an area measurement, you cannot rescale the contour.
à The measurement values are updated instantly.
3. To move a measurement, drag and drop the respective measurement in the Image View.
4. To change the Name of a measurement, double click on the current name.
à The name in the field is editable.
5. Enter the desired name and press Enter.
à The measurement is renamed.
6. To change the color of the measurement itself or the text showing the measured value, in
the Measurements tab, click the color field for Line or Text respectively.
à A color selection window opens.
7. Select the desired new color from the list, or click Custom to select a customized color.
à The color of the measurement or the text is updated instantly.
8. To toggle the visibility, click for the respective entry in the list.
à The measurement is toggled (in-)visible.
See also
2 Measurement Tab [} 658]
The ZEN Connect tool provides a Layers view and a Project view of the image data that you have
acquired for the ZEN Connect project. Every image that you have acquired for the project is
listed. As you acquire or import more image data, the new image data is listed in the views.
The ZEN Connect tool offers different options to open a ZEN Connect project or to create a
new ZEN Connect project.
Alternatively, you can create a new ZEN Connect project via File > New Document. For more
information, see New Document Dialog [} 817].
The ZEN Connect tool displays the following:
§ Images that have been acquired for the ZEN Connect project.
§ Images that have been imported to the ZEN Connect project.
§ Position of the image in the project or the layers.
§ Non image data added to the ZEN Connect project.
1 Sample holder
Here: Custom Carrier 1
2 Imported Data
Below it, the data added to the ZEN Connect project is displayed. These images
were not acquired within the Connect project.
3 On top, the session node is displayed. It contains the following information. Below it,
the acquired images are listed.
Here: LM (Microscope)
Here: 20190326 (Date <yyyymmdd>)
Here: 125358 (Time <hhmmss>)
Here: New-01-02 (Image taken in this session)
4 Image Information
Displays the objective magnification with which an image was acquired (here: 40x). Ad-
ditionally, the icon is displayed for a z-stack image.
5 Z-Stacks which are opened in the ZEN Connect 3D view [} 621] are displayed with an-
other icon in the tree view.
Images in your ZEN Connect project are displayed in the Image View according to their position in
the Layers View. With drag & drop, you can move images over and under other images. You can
also hide them completely. For more information, see Moving or Hiding Images [} 630].
Additionally, you can see the objective magnification with which the image was acquired and
whether an image is a z-stack.
Parameter Description
Opacity Changes the opacity of layers. The overall opacity determines to what
degree it obscures or reveals the layer beneath it. A layer with 1%
opacity appears nearly transparent, whereas one with 100% opacity
appears completely opaque.
Alignment Only available if you have selected one or more images or a session. Starts the selected alignment operation for the currently selected image(s) or session. Click on the arrow button to select an alignment.
– Align Activates the alignment process for the selected data, see Activating the Alignment Process [} 635].
– Align system Activates the alignment process for the current microscope session, see Activating the Alignment Process [} 635].
– Manual Alignment Wizard Opens the wizard for manual alignment, see Aligning Images in the Manual Alignment Wizard [} 644].
– Point Alignment Wizard Opens the wizard for point alignment, see Aligning Images in the Point Alignment Wizard [} 645].
– Point Alignment 3D Wizard Opens the wizard for 3D point alignment, see Aligning Images in the 3D Point Alignment Wizard [} 645].
– Align Two Images in 3D Opens the selected images directly in alignment mode in the ZEN Connect 3D view, see also Aligning Images in the ZEN Connect 3D View [} 643].
Import Starts the currently selected operation to add and/or import an image. Click on the arrow button to select an import operation, see also Importing Data [} 631].
– Add image Opens the file browser to add an image, see Adding an Image to the ZEN Connect Project [} 626].
– Add image from storage Opens the Stored Documents dialog to add an image from the ZEN Data Storage, see Adding an Image to the ZEN Connect Project [} 626].
– Import non-image data Opens the file browser to select non-image data for an import, see Importing Non Image Data [} 633].
– Import SmartFIB z-stack Opens the file browser to import a SmartFIB z-stack, see Importing a SmartFIB Stack into ZEN Connect [} 632].
– Add dataset Opens the file browser to add a dataset, see Adding Datasets when Adding Images [} 628].
– Single Image Export Opens the wizard to export the selected data in one single image, see Exporting Single Image Data [} 633].
– Video Export Opens the wizard to export the selected data as a video, see Exporting a ZEN Connect Project as a Video [} 634].
– Export Project Opens a file browser to export your current ZEN Connect project, see Exporting a ZEN Connect Project from ZEN Data Storage [} 271].
New Session Creates a new session, see Starting a New Session [} 630].
Context menu
– Zoom to Zooms one or more selected images into the center of the Image View, see Zooming Images [} 628].
– Zoom to 100% Zooms to the selected image in the Image View and displays it at 100% scale, see Zooming Images [} 628].
– Show/Hide Toggles the visibility of an image or a session, see Moving or Hiding Images [} 630].
– Open image(s) in ZEN Opens the selected image(s) in ZEN, see Opening Images in ZEN [} 629].
– Show in Explorer Locates an image or file on your computer, see Showing an Image in the Explorer [} 629].
– Remove Data Removes the selected image from the ZEN Connect project, but does not delete it from the computer, see Removing Images from the ZEN Connect Project [} 629].
– Rename Data Renames an image, see Renaming Images in a ZEN Connect Project [} 629].
– Align Data Activates the alignment process for the selected data, see Activating the Alignment Process [} 635].
– Single Image Export Opens the wizard to export the selected data in one single image, see Exporting Single Image Data [} 633].
– Video Export Opens the wizard to export the selected data as a video, see Exporting a ZEN Connect Project as a Video [} 634].
– Add image Opens the file browser to add an image, see Adding an Image to the ZEN Connect Project [} 626].
– Add image from storage Opens the Stored Documents dialog to add an image from ZEN Data Storage, see Adding an Image to the ZEN Connect Project [} 626].
– Import SmartFIB z-stack Opens the file browser to import a SmartFIB z-stack, see Importing a SmartFIB Stack into ZEN Connect [} 632].
– Show in 3D Only available if one or two z-stack images are selected in the tree view. Opens the images in the ZEN Connect 3D view, see Opening Images in the ZEN Connect 3D View [} 623].
– New Session Starts a new session. The images acquired next will be subordinated below a new session node, see Starting a New Session [} 630].
– Clear alignment Resets the alignment and places the selected image at its initial position.
– Zoom to extent Resets the view space of the Image View to be centered on the holder with a field of view (FOV) that includes all visible images in the project, see Zooming to Extent [} 646].
– Export Project Opens a file browser to export your current ZEN Connect project, see Exporting a ZEN Connect Project from ZEN Data Storage [} 271].
– Import non-image data Opens the file browser to select non-image data for an import, see Importing Non Image Data [} 633].
See also
2 Aligning Image Data [} 635]
Parameter Description
Celldiscoverer Only available for the Celldiscoverer application.
Shows a list of Celldiscoverer sample holders.
See also
2 Selecting and Clearing Carrier/Holder [} 647]
The alignment tab is visible as soon as you enter the alignment mode. For more information, see
Activating the Alignment Process [} 635].
You can perform three-point alignment to:
§ Line up an imported image with reference marks, such as the precision fiducials on a CorrMic Holder.
§ Line up features in an imported image with LM, EM and SEM images of the same features.
§ Line up a session of LM, EM and SEM imagery with a previously acquired LM, EM and SEM imagery session.
With the three-point alignment process you can set the position, rotation, and scale of an image
or tile. This is used to line the image or tile up with reference marks or other images. Once an im-
age is lined up, it can be used as a reference (road map) to move the stage to control further im-
age acquisition.
Parameter Description
Alignment Mode Sets which data properties you can change during the alignment.
- Translate Only Moves the item you are aligning in x and y only without changing its
size or orientation.
- Translate and Rotate Only Moves the item in x and y direction and changes its orientation. It does not change the scale of the item you are aligning.
- Translate, Rotate and Scale Only Moves, reorients, and resizes the item you are aligning. It does not shear it.
- Translate, Rotate, Scale and Shear Supports full three-point alignment.
- 3D Alignment Displays options to align an image in z direction and for three-dimensional rotation.
Set to current Global-Z Only visible if 3D Alignment is selected, and only active if the Global-Z slider is activated on the Dimensions tab. Sets the z value of the currently selected Global-Z as the z value for the center of the z-stack.
- Rotation X-Axis Sets the rotation along the x-axis with the slider or input field. Click on the button to reset the angle.
- Rotation Y-Axis Sets the rotation along the y-axis with the slider or input field. Click on the button to reset the angle.
- Rotation Z-Axis Sets the rotation along the z-axis with the slider or input field. Click on the button to reset the angle.
- Angle Step Size Sets the angle step size for the slider/input fields above.
- View Cube Control With this cube control, you can rotate the stack interactively. It has a visual representation of the current stack (white box) and the cutting plane which is displayed in the 2D view above.
- Presets Sets the View Cube Control to a predefined position. You can set it to the Default, or you can select an orientation from the dropdown of the button to show the Default, Viewer Perspective, Left, or Right orientation.
Reset Resets the alignment to the state when you started this alignment operation.
Clear Resets the alignment to the state when the data was first acquired or imported.
Finish Exits from the alignment operation, keeping the alignment you have
established.
Cancel Returns to the alignment as it was when you started, and exits from
the alignment operation.
See also
2 Aligning Image Data [} 635]
2 Aligning Images in Z Direction [} 641]
2 Rotating a Z-Stack [} 643]
Parameter Description
Measurement Tools
Measurement table
– Line Displays the color of the measurement. A click on the color field
opens a selection window to change the color.
– Text Displays the color of the measurement text. A click on the color field
opens a selection window to change the color.
See also
2 Adding Measurements to a ZEN Connect Project [} 649]
2 Editing Measurements in a ZEN Connect Project [} 650]
Parameter Description
Zoom To Extent Resets the view space of the Image View to be centered on the holder with a field of view (FOV) that includes all visible images in the project. For more information, see Zooming to Extent [} 646].
Pan & Zoom Activates the mouse for panning around and zooming in and out in the Image View. For more information, see Panning & Zooming [} 646].
Select Carrier/Holder Opens a dialog to select a carrier or sample holder that matches your configuration. For more information, see Selecting and Clearing Carrier/Holder [} 647].
Grab Image Creates an image of the ZEN Connect project. For more information,
see Grabbing an Image [} 648].
With the ZEN Connect Single Image Export wizard, you configure the parameters of the image
you want to export.
Parameter Description
Color Style Controls the color format of the export. Select RGB (color) or Intensity (black and white).
- RGB Color Based on the RGB (Red-Green-Blue) color model. Be aware that color
files may be up to three times as large as intensity files.
Export Format Selects the format for the exported file. Only the file formats that sup-
port the number of pixels you are exporting will be shown. If your ex-
port is too large, formats like BMP, JPG and TIFF are not displayed. If
you wish to export to one of these formats, you must pick a larger
pixel size, or smaller export area for your export.
- Raw Image Exports the image as a raw binary dump of the pixel values. An XML
file is also written detailing the image width and height in pixels, the
pixel size in microns, the bits per sample and samples per pixel. Raw
files are not limited in size.
- CZI Image (Single Channel) Exports the image as a Carl Zeiss Image file. CZI files are not limited in size. The images are exported in a single channel CZI.
- CZI Image (Multi-Channel) Exports the image as a Carl Zeiss Image file. CZI files are not limited in size. The images are exported as a multi-channel CZI.
Note that the resulting image might look different when reopened in
ZEN than it does in ZEN Connect.
- Tif Image Exports the image as a standard TIFF file. TIFF files are limited in size.
- Tif Tiles With this option, the export is in TIFF format, but broken into 2K x 2K
tiles saved as individual TIFFs. An XML file is written listing the file
names of the tiles and their positions. This export option is unlimited
in size, but designed for someone who is writing scripts to import the
data into image processing applications or similar.
- Bitmap Image Exports the image as a standard Windows Bitmap. Bitmap files are
limited in size.
- Jpg Image Exports the image as a standard JPEG file. JPEG files are limited in size.
Burn-in Data Bar Activated: Burns the currently configured data bar into the exported
image.
Show Region Caption Controls if image names and frames are shown in the exported image.
JpgXr Compression Quality Activated: Uses the (lossy) JpgXr compression of the image during export and sets the quality for the compressed image with the input field.
Rotation Rotates the view to the desired orientation. Drag the slider, or in the
text field, type in the value.
Pixel Size Sets the pixel size of the export. The smaller the pixel size, the more
disk space your export will take. The options Low, Medium and High
only have an effect, if several images are exported as a single image.
- Smallest Sets the pixel size of the finest image, i.e. the smallest pixel size.
- Medium Calculates and sets the pixel size to the log average of the finest and coarsest image, i.e. 10^(0.5*(log(fine)+log(coarse))), which is their geometric mean.
- Largest Sets the pixel size of the coarsest image, i.e. the largest pixel size.
- Custom Sets a custom pixel size which can be entered in the input field.
Width (px) Sets the width to directly alter the export pixel count and export area
(the pixel size is unchanged).
Height (px) Sets the height to directly alter the export pixel count and export area
(the pixel size is unchanged).
Width (µm) Displays the full width of the export area in µm.
Height (µm) Displays the full height of the export area in µm.
Approx. Data Size Displays an approximation for the data size of the exported image.
The actual file size after export may be less, depending on compres-
sion of some file formats.
Export Data Opens a file browser to export the image with your settings.
For more information on exporting single images, see Exporting Single Image Data [} 633].
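The Medium option's log-average formula works out to the geometric mean of the finest and coarsest pixel sizes. The following sketch is illustrative only, not ZEN code; the function name is invented:

```python
import math

def medium_pixel_size(fine, coarse):
    """Log average of the finest and coarsest pixel sizes:
    10^(0.5*(log10(fine) + log10(coarse))) = sqrt(fine * coarse)."""
    return 10 ** (0.5 * (math.log10(fine) + math.log10(coarse)))

# A 0.1 µm/px detail image combined with a 10 µm/px overview image:
print(medium_pixel_size(0.1, 10.0))  # 1.0 µm/px
print(math.sqrt(0.1 * 10.0))         # identical result
```

Because the average is taken on a logarithmic scale, it balances images whose pixel sizes differ by orders of magnitude better than a plain arithmetic mean would.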
Parameter Description
FOV Width Displays the current width of the field of view and allows you to enter
a value.
Cancel Closes the dialog box without setting the field of view.
Parameter Description
Pixel Size Displays the current pixel size and allows you to enter a value.
Cancel Closes the dialog box without setting the pixel size.
Parameter Description
Burn in Data Bar Burns the currently configured data bar into the exported video.
Show Region Caption Controls if the image names are shown in the exported video.
Show Region Outline Controls if the image frames are shown in the exported video.
Export Resolution Sets the resolution and format for the video export.
Resolutions available in the dropdown list:
§ 320 x 240 (4:3)
§ 428 x 240 (16:9)
§ 640 x 480 (4:3) (=default value)
§ 854 x 480 (16:9)
§ 960 x 720 (4:3)
§ 1280 x 720 (16:9)
§ 1440 x 1080 (4:3)
§ 1920 x 1080 (16:9)
Zoom to Extent Places the image at the center of the preview area.
Rotation You can use the slider or the input field to rotate the view.
Start Delay Sets the delay at the start of the video. The default setting is 1.0 seconds.
Key Frames Lists the key frames, including the data of the position and FOV.
Reset key frame to current view Sets the values of the selected key frame to the current view.
Options
Transit To Sets the transition time for zooming to a selected key frame. The default setting is 3.5 seconds.
Pause At Sets the time to stay at the selected key frame. The default setting is 0.5 seconds.
Return to first at end Returns to the first key frame at the end of the video.
See also
2 Exporting a ZEN Connect Project as a Video [} 634]
This tool displays a list of regions of interest (ROI) which are drawn into a ZEN Connect project.
Parameter Description
ROI List Here you see a list of all ROIs in your ZEN Connect project.
A double click on an entry moves the stage to the center of the re-
spective ROI.
See also
2 Using Regions of Interest in ZEN Connect [} 648]
1 Alignment Parameter
Control parameters to align the respective image in the project. For more information,
see Alignment Parameter Section [} 664].
2 Image View
Displays the images of the project and allows alignment of images.
3 View Options
Here you have the general options of the Dimensions tab [} 1029]. The options are al-
ways those of the image selected in the Select Node For Dimensions tab.
See also
2 Aligning Images in the Manual Alignment Wizard [} 644]
Parameter Description
Translate
– Step Size Sets the step size for the translation in x and y.
– X-Direction Sets the translation in x direction. The reset button resets the
value to the default.
– Y-Direction Sets the translation in y direction. The reset button resets the
value to the default.
Scale
Parameter Description
– X-Direction Sets the scaling factor in x direction. The reset button resets the
value to the default.
– Y-Direction Sets the scaling factor in y direction. The reset button resets the
value to the default.
Rotate
– Rotation Center Defines the rotation center around which the image is rotated. It is indicated in the Image View with a pin.
– Angle Sets the rotation angle. The reset button resets the value to the
default.
– Custom You can set the pin in the image to your custom rotation center.
Flip/Mirror
Shear
– Enable Activated: Activates the shearing mode and displays the three shearing pins in the image.
Reset All Parameters Resets all alignment parameters in the wizard to the default values.
Finish Saves the changes and closes the wizard.
See also
2 Aligning Images in the Manual Alignment Wizard [} 644]
This wizard guides you through a three-point alignment of the image in your ZEN Connect
project.
See also
2 Aligning Images in the Point Alignment Wizard [} 645]
2 Image Window
Displays the image(s) you are aligning. Area where you set the subject points.
3 Project Window
Displays all the images of the project except the one you are aligning. Area where you set
the reference points.
See also
2 Aligning Images in the Point Alignment Wizard [} 645]
Parameter Description
Point list
§ Yellow: You are in drawing mode and have not yet set the refer-
ence point in the Project Window.
§ Green: You have set the reference point.
– Draw Only visible if no points have been drawn for the current point entry.
Enters the drawing mode for the respective point.
– Redraw Only visible if you have already drawn the reference and subject point
for this entry.
Reenters the drawing mode to redraw the points for this entry.
– Delete Deletes the currently selected list entry and removes all drawn points of the entry.
– Autoselect Automatically selects the algorithm suitable for the drawn points.
– Translation Moves the item you are aligning in x and y only, without changing its
size or orientation.
– Translation Moves the item in x and y direction and changes its orientation. It
and Rotation does not change the scale of the item you are aligning.
Next Moves on to the next step of the wizard.
See also
2 Aligning Images in the Point Alignment Wizard [} 645]
This step displays a preview of the finished alignment and the parameter values of each align-
ment.
Parameter Description
Algorithm Result Displays the resulting alignment changes.
Scaling Displays the resulting scaling factor for the X-Dimension and Y-Dimension.
This wizard guides you through a 3D point alignment of the image in your ZEN Connect project.
See also
2 Aligning Images in the 3D Point Alignment Wizard [} 645]
3 Project Window
Displays all the images of the project except the one you are aligning. Area where you set
the reference points.
4 View Options
Displays some general view options of the Dimensions Tab [} 1029] for the respective
windows.
Parameter Description
Point list
– Draw Only visible if no points have been drawn for the current point entry.
Enters the drawing mode for the respective point.
– Redraw Only visible if you have already drawn the reference and subject point
for this entry.
Reenters the drawing mode to redraw the points for this entry.
– Delete Deletes the currently selected list entry and removes all drawn points of the entry.
– Autoselect Automatically selects the algorithm suitable for the drawn points.
– Translation Moves the item you are aligning in x and y only, without changing its
size or orientation.
– Translation and 2D Rotation Moves the item in x and y direction and changes its 2D orientation. It does not change the scale of the item you are aligning.
– Translation and 2D Scaling Moves and resizes the item you are aligning.
– 3D Rotation and Translation Moves and rotates the item in three dimensions. It does not change the scale of the item you are aligning.
This step displays a preview of the finished alignment and the parameter values of each align-
ment.
Parameter Description
Algorithm Result Displays the resulting alignment changes.
Rotation Displays the resulting rotation angle around the x-, y- and z-axis.
Scaling Displays the resulting scaling factor for the X-Dimension, Y-Dimension and Z-Dimension.
Before acquiring an image with the light microscope and using it for correlative microscopy, it is
necessary to make general settings, e.g. stage calibration, camera orientation, objective
calibration and setting the correct scaling. Please note that we do not describe all of these topics
within this guide, as we focus on the Shuttle & Find workflow only.
Furthermore, we will not describe basic functionality of the software in this guide, such as the
program layout or general image acquisition topics.
See also
2 Shuttle & Find Sample Positions at the Electron Microscope [} 678]
For correlative microscopy with light microscopes, the ZEN software has to be installed. In addition,
you need to license the Shuttle & Find module.
1. To start the software, double-click the ZEN program icon on your desktop.
à The software starts now.
à Check that Shuttle & Find is activated under Tools > Modules Manager ....
2. In the Left Tool Area switch to the Acquisition tab and activate Shuttle & Find.
3. Open the Shuttle & Find tool.
You have successfully started the software. Now you can start working with the Shuttle & Find
module.
Prerequisite ü You have activated Shuttle & Find in the Experiment Manager.
ü You are in the Shuttle & Find tool.
1. Click on the Select… button to open the Select Template dialog and choose the correlative
holder you want to use. Different types of correlative holders are available, see Appendix
Correlative Sample Holders.
2. In the Select Template dialog, select the correlative holder you want to work with. If you
want to use your own sample holders, click on the Add button below the list and follow
the instructions in the chapter Defining a new sample holder template [} 674].
With this dialog you can define new correlative holders in addition to the existing holder tem-
plates. It is not mandatory to use correlative holders from ZEISS. User-defined correlative holders
with 3 fiducial markers can be used as well.
1. To open the dialog, click on Add in the Select Template dialog, which can be
opened via the Shuttle & Find tool.
à The New Template dialog opens.
2. Type in a name for the new holder or sample carrier. An image of the new holder can be
loaded as well.
3. Enter the distances (in millimeters) between the first and the second marker and between
the second and the third marker.
à The distances can be determined using the Stage Control dialog, accessible via the Light
Path tool in the Right Tool Area tab. We recommend doing this before opening the New
Template dialog; write down the distances so that you can enter them in the New
Template dialog.
à Activate the live view in the Center Screen Area by clicking on the Live button in the Lo-
cate tab.
à Navigate the stage manually to the calibration marker on the sample holder by means of
the joystick and note the x/y-coordinates of the marker.
à Repeat this procedure for all three markers and calculate the distances between marker 1
and marker 2 and between marker 2 and marker 3, respectively.
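The distance calculation in the step above is a plain Euclidean distance between the noted stage coordinates. A small sketch (the coordinates below are hypothetical examples, not values from your holder):

```python
import math

def marker_distance(p, q):
    """Euclidean distance between two stage positions (x, y), in mm."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

# Hypothetical x/y stage coordinates (in mm) noted for the three markers:
marker1 = (10.0, 5.0)
marker2 = (40.0, 5.0)
marker3 = (40.0, 45.0)

d12 = marker_distance(marker1, marker2)  # distance marker 1 -> marker 2
d23 = marker_distance(marker2, marker3)  # distance marker 2 -> marker 3
print(d12, d23)  # 30.0 40.0
```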
Correlative sample holders have three fiducial markers enabling a three-point calibration (labeled
with the numbers 1-2-3). Each calibration marker consists of a small L-shaped marker (length 50 µm)
and a large one (length 1 mm). The larger marker is used for coarse orientation, whereas the
smaller marker is used for the calibration.
1. Click on Live in the Acquisition tab to activate the live view in the Center Screen Area.
2. Navigate the stage manually to the first calibration marker on the sample holder (marked
with No. 1) by means of the joystick. It is sufficient to move the stage to the larger L-shaped
calibration marker; the smaller marker will be detected automatically within the
Sample Holder Calibration Wizard. To locate the marker positions, we recommend using
a dry objective with low magnification (5x – 20x).
à After setting marker position 3 you will find a green check mark icon which shows that
the calibration was successful.
Image acquisition is basically performed as usual in the ZEN software. The file format for
Shuttle & Find data is the common *.czi file format. Saved images can be loaded in ZEN
via the menu File > Open.
After image acquisition, the next step in the correlative workflow is to define/draw ROIs/POIs in
your image. For this, you can use the Region tools on the S&F tab, see Regions, Find and Di-
mensions [} 689].
Now you can transfer (Shuttle) the sample and the LM (Light Microscope) image file (.czi) to the
SEM (Scanning Electron Microscope). There you can easily relocate (Find) the same sample posi-
tions and acquire a corresponding image within the ZEN SEM software. The steps are exactly the
same as for the light microscope.
For imaging your sample in the SEM, insert the sample holder (2) in the special SEM adapter (1)
and mount it to the SEM.
Info
The arrow of the sample holder must face the arrow of the SEM adapter.
For correlative microscopy with scanning electron microscopes, SmartSEM and ZEN SEM have to
be installed. SmartSEM remains the control software of the scanning electron microscope; ZEN SEM
comes as an add-on to SmartSEM for performing correlative microscopy and using Shuttle & Find on
an SEM.
à The program starts with a reduced user interface compared to the full software. In the
Left Tool Area, only the SEM Acquisition tab and the Processing tab are avail-
able. On the SEM Acquisition tab you will find the Shuttle & Find tool, which has 3
additional buttons at the lower part of the tool.
This step is exactly the same as for the light microscope; for the exact steps to perform, please
read the chapter Selecting the Sample Holder [} 673].
Like the previous step, this step is identical to the light microscope workflow; please refer to
the chapter Calibrating the sample holder [} 675] for details.
Info
4 The calibration of the sample holder has to be done on both systems, the LM and the SEM.
Otherwise the relocation of your sample positions or ROIs/POIs stored in the image will not
be successful.
4 Note that for Shuttle & Find the beam shift must be switched off. The beam shift is deacti-
vated in SmartSEM as follows:
ð Call up the shortcut menu Center Point/Feature by right-clicking on the Stage prop-
erty page.
ð Select Center Point/Feature and select Stage only.
à In the left image container you see the live image from the SEM. The right image con-
tainer is empty.
6. Drag the loaded LM image from the Images and Documents gallery into the empty image
container.
Now you can easily relocate sample positions by double-clicking within the image or on the ROI/
POI button (if ROIs/POIs are drawn and selected) on the S&F tab.
For image acquisition, you have to use the Snap button within ZEN SEM. Note that we do not
describe setup and image acquisition with the SEM; please read the online help or user guide of
the SEM software.
The precision of relocation can be improved by determining an offset value. This value de-
scribes the position offset between the loaded image and the live image. The defined offset value
is only valid for the loaded image. If another image is loaded or if you close the dialog, the off-
set value will be deleted.
Prerequisite ü An offset is visible when you try to relocate marker positions in the live image compared to
the LM image.
1. Click on the Set Offset button.
à The stage moves to the selected marker position. Then a message appears which asks
you to move the stage to the correct position.
2. Move the stage manually to the correct position by using the joystick.
3. Confirm the message by clicking on the OK button.
Now you can repeat the relocation. The positions should be identical now.
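Conceptually, the offset correction is a constant shift that is added to every relocation target for the currently loaded image. A minimal sketch of the idea (function names and coordinates are illustrative, not part of ZEN):

```python
def measure_offset(expected_pos, actual_pos):
    """Offset between where the stage moved to (position from the loaded
    image) and where the feature actually is (corrected live position)."""
    return (actual_pos[0] - expected_pos[0], actual_pos[1] - expected_pos[1])

def apply_offset(pos, offset):
    """Correct a relocation target by the measured offset."""
    return (pos[0] + offset[0], pos[1] + offset[1])

# The stage moved to (100.0, 200.0), but the marker was actually at (101.0, 199.5):
offset = measure_offset((100.0, 200.0), (101.0, 199.5))
# Every subsequent relocation for this loaded image is corrected by that shift:
corrected = apply_offset((150.0, 250.0), offset)
print(corrected)  # (151.0, 249.5)
```

Because the offset is measured for one specific loaded image, it is discarded as soon as another image is loaded, exactly as described above.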
Prerequisite ü You have acquired and loaded two images containing S&F calibration data (e.g. LM/SEM) to
be correlated. If the images are not oriented identically, you can use the Mirror Image but-
tons under Options on the S&F Correlation tab.
ü You see the two images next to each other (splitter view) in the center screen area. If not,
drag your images from the Images and Documents gallery into the center screen area.
1. Click on the Set correlation points button in the S&F Correlation tab.
à The cursor will change to a pipette symbol.
2. Click in the left image to set a correlation point. Set all 3 marker points in the left image
first, before you set the corresponding 3 markers in the right image. If a correlation point is
set, a check mark icon will appear in front of the corresponding point.
à Make sure that the positions in both images are identical. After you have set all 6 points,
the cursor changes back from the pipette to the arrow.
3. Click on the Create Correlation button.
The correlated image will be generated and opened in a new image container.
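Three point pairs are exactly the number needed to determine a general 2D affine transform (translation, rotation, scaling, and shear) between the two images. The sketch below shows how such a transform can be computed from six correlation points; it illustrates the principle only and is not ZEN's actual implementation:

```python
def affine_from_points(src, dst):
    """Solve x' = a*x + b*y + c and y' = d*x + e*y + f from three
    corresponding point pairs src[i] -> dst[i], via Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = src
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
    if det == 0:
        raise ValueError("correlation points must not be collinear")

    def solve(v1, v2, v3):
        # Coefficients of one output coordinate as a function of (x, y, 1).
        a = (v1 * (y2 - y3) - y1 * (v2 - v3) + (v2 * y3 - v3 * y2)) / det
        b = (x1 * (v2 - v3) - v1 * (x2 - x3) + (x2 * v3 - x3 * v2)) / det
        c = (x1 * (y2 * v3 - y3 * v2) - y1 * (x2 * v3 - x3 * v2)
             + v1 * (x2 * y3 - x3 * y2)) / det
        return a, b, c

    return (solve(dst[0][0], dst[1][0], dst[2][0]),
            solve(dst[0][1], dst[1][1], dst[2][1]))

def apply_affine(t, p):
    """Map a point from the source image into the destination image."""
    (ax, bx, cx), (ay, by, cy) = t
    return (ax * p[0] + bx * p[1] + cx, ay * p[0] + by * p[1] + cy)
```

With the transform in hand, each pixel of the image to be transformed can be mapped into the coordinate system of the other image, which is what makes the overlay possible.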
To use Shuttle & Find (software and correlative holders) with an EVO 10, make sure that the stage
limits (for x, y and z) are set as follows:
Holder Positions
The holder positions must be oriented as shown in the images.
NOTICE
If you set a wrong orientation, the stage cannot be moved to all correlative markers because of
the stage limits of the EVO 10.
4 The holder has to be mounted into the EVO in such a way that the correlative markers
(1) and (2) are near the chamber door, whereas marker (3) is located furthest from
the chamber door (see Mounting A/B).
4 If necessary, the SEM image can be rotated according to the LM image using the option
Scan Rotate in SmartSEM.
Mounting A: Mounting B:
See also
2 Sample Holder Calibration Wizard [} 692]
Here you choose and calibrate your sample holders. The tool is visible only if you have activated
the Shuttle & Find checkbox in the Experiment Manager.
Parameter Description
Sample holder Here you see the name and preview of the selected sample
holder.
Select... Opens the Select Template dialog. There you select the pre-
ferred sample holder or define new holder templates, see Select-
ing the Sample Holder [} 673].
Parameter Function
Scale bar Adds a scale bar to the snapped (acquired) image.
Besides the Shuttle & Find tool in the Left Tool Area, the S&F (Shuttle & Find) view is visible
in the Center Screen Area of the ZEN software. If the S&F view is selected, the S&F tab and
S&F Correlation tab will appear as specific view options under the image area.
Here you find helpful options and tools to draw and relocate regions of interest (ROIs) or
points of interest (POIs) within the sample image.
6.17.8.1.1 Options
Parameter Description
Mirror Image Here you can mirror the image horizontally or vertically
by using the two buttons at the right. The alignment of the
images depends on the microscope (upright/inverted) and
orientation of the sample holder.
Keep tool Activated: Keeps the current tool active. That is helpful if
you want to draw in more than one ROI/POI.
Auto color Activated: Uses a new color for each new element which is
drawn in.
Snap to Pixel Activated: Draws in graphical elements using the pixel grid.
Use fine calibration value Activated: Uses the measured fine calibration.
The precision of relocation, and therefore the quality of the
overlay image, can be improved by determining an offset
value. This value describes the offset between the
loaded image and the live image. The defined offset value is
only valid for the loaded image which you can see in the
container. If another image is loaded or if you close the dia-
log, the offset value will be deleted.
Determine the offset by identifying a POI (Point Of In-
terest) within the snapped image. To identify a POI, use the
buttons in the Regions section. By clicking on the Set Off-
set button, the stage moves to the supposed sample posi-
tion. Compare the sample position within the live image
with the set POI and correct the stage so that both
positions are identical. Confirm the fine calibration
with the OK button. Now the fine calibration is measured
and the checkbox is activated.
For more information, see Fine Calibration of the Sample
Holder [} 682].
Double click in image to move stage: Activated: Moves the stage to the position you have double-clicked on.
Refocus after stage movement: Activated: Adjusts the focus automatically after the stage has moved.
Move stage in z-direction before x/y movement: Activated: Moves the stage to the load position before it moves to the next correlative calibration marker.
Show splitter view: Activated: Activates the Splitter Mode in the Center Screen Area.
Parameter Description
Selection mode: Selects the ROIs or POIs in the image area. If you are currently in another mode, you can switch back to the Selection mode using this button.
Parameter Description
Draw marker: Draws a marker point (Point of Interest (POI)).
Dimension section
Here you see coordinates and dimensions of the selected graphical element in the list. If the
Scaled checkbox is activated, the unit is µm, otherwise Pixel.
§ Parameter X: Shows the horizontal position (x coordinate) of the center of the graphical ele-
ment.
§ Parameter Y: Shows the vertical position (y coordinate) of the center of the graphical element.
§ Parameter W: Shows the width of the graphical element.
§ Parameter H: Shows the height of the graphical element.
Parameter Description
Eye symbol Shows or hides the ROI/POI in the image.
Type Displays the icon for the tool type (ROI/POI). To format a graphic ele-
ment, double-click on the icon. The Format Graphic Elements dia-
log opens.
Name Displays the name of the graphic element. To change the name, dou-
ble-click in the Name field. Then enter the text of your choice.
Parameter Description
Transform Here you select which image will be transformed. Choose via the Left
Image/Right Image buttons, which image should be transformed in
the other. During transformation, a pixel in the overlay image is calcu-
lated by using pixels of the two original images that shall be overlaid/
merged.
Interpolation Here you can select one of the following interpolation methods:
- Nearest Neighbor: The gray value of the resulting pixel in the overlay
image is taken from the nearest pixel in the original image. This inter-
polation method is very fast.
- Cubic: The resulting pixel in the overlay image is assigned a gray
value calculated by means of a polynomial function using gray values
of nearby pixels in the original images.
Mirror image Here you can mirror the image horizontally or vertically; simply
click on the corresponding button.
Mirroring an image is necessary when the loaded image shows a dif-
ferent orientation than the live image.
Show Correlation Activated: Opens the correlated image in a new image document/
new container.
Set correlation Enables you to set 6 points (3 points in each image) as correlation
points markers in a row, see Correlating Two Loaded Images [} 682].
Create Correlation Active only if all correlation points are set in both images.
Creates a correlative overlay image. A third image container with the
correlated image will be opened in the Center Screen Area and the
Show Correlation checkbox will be activated automatically.
Option Description
Save marker images: Activated: the marker images are saved during the calibration. The images can be used to check the calibration afterwards. Click on the Select Folder (...) button to select a storage folder.
Move the stage to load position before x/y movement: Activated: the stage will move to the load position before moving to the next correlative calibration marker. If an AxioObserver is used, the objective revolver moves to the load position.
Automatic movement to next marker: Activated: by clicking on the Next button within the wizard, the stage moves automatically to the next calibration marker.
Use Autofocus at each marker position: This option is active only if the Automatic movement to next marker checkbox is activated. Activated: the focus is adjusted automatically after moving to the next marker position.
Use automatic marker detection: Activated: the software will try to detect the small calibration marker automatically.
Use settings for marker detection: This option is active only if the Use automatic marker detection checkbox is activated. Activated: shows settings for marker detection (see description below). Here you select the properties of the calibration markers.
Threshold marker detection: high – low: A low threshold for marker detection is used when the dimensions of the correlative L-markers cannot be recognized precisely, e.g. when the sample holder is slightly dirty.
Marker color: Here you select the color of the markers displayed in the live image.
White: the marker is displayed white on a dark background.
Black: the marker is displayed dark on a light background.
Auto: the marker color is set automatically.
Marker orientation: Here you need to set the orientation of the L-markers on your sample holder. Click on the corresponding button to select the orientation of the calibration marker which you can see in the live image.
If you click on the Next button you will move to the next step of the wizard.
In steps 2-4 of the wizard you will be guided through the calibration procedure.
Option Function
Holder position:
Move to Position 1 button: Moves the stage to marker position 1. This is possible only if the first position was set before and x/y coordinates are given.
Current button (only visible for marker positions 2 and 3): Moves the stage to the current marker position. This is possible only if the current position was set before and x/y coordinates are given.
Stage movement to the next marker: Here you can change the movement of the stage in x or y direction. This is necessary if the stage moves in the wrong direction during calibration.
Marker position: By clicking on the Set button, the current marker position is confirmed.
Name Image
Life Science cover glass
22 x 22
Name Image
Cover glass with fiducials
22 x 22
Name Image
MAT Flat Stubs
MAT Universal A
Name Image
MAT Universal B_A
7 Application Toolkits
Toolkit Included Functionality
Bio Apps § Bio App Cell Counting [} 699]
§ Bio App Confluency [} 699]
§ Bio App Gene and Protein Expression [} 699]
§ Bio App Automated Spot Detection [} 699]
This module offers the functionality to set up image analysis for very specific analysis scenarios
(e.g. to count the number of cells in an image). Each individual analysis scenario has its own appli-
cation.
§ Cell Counting
This Bio App provides a simple automated image analysis workflow customized for counting
of fluorescently labeled cell nuclei in biological samples and a tailored result presentation with
interactive measurement tables, heatmap and plots. This module allows automatic monitoring
of cell numbers, and thus proliferation, e.g. under the influence of compounds. The mea-
surement features include the number and the density of the cell nuclei as well as the mean
intensity and mean area of cells in a well.
§ Confluency
This Bio App provides a simple automated image analysis workflow customized for quantify-
ing the cell confluency using variance-based segmentation and a tailored result presentation
with interactive measurement tables, heatmap and plots. Applications addressed by this mod-
ule include cell confluency assays as a measure of quality control in cell-based assays and
wound healing assays to follow cell migration and cell-cell interaction. The measurement fea-
tures include the covered area and the area percentage.
§ Gene- and Protein Expression
This Bio App provides a simple automated image analysis workflow customized to quantify
gene expression and a tailored result presentation with interactive measurement tables,
heatmap and plots. Applications addressed by this module include:
– Automatic evaluation and quantification of the transfection efficiency to e.g., optimize the
transfection protocol or to pick positive clones for targeted assays
– Measurement of the expression level to quantify the abundance and distribution of labeled
molecules within a cell population
– Quantification of viral or bacterial infection
The measurement features include the total number of cells and number of positive cells, the
percentage of positive cells and the mean intensity of the transfection channel.
§ Automated Spot Detection
This Bio App provides a simple automated image analysis workflow customized to quantify
spots in the cell nuclei (e.g. for FISH, telomeres, centromeres, foci counting,...) and a tailored
result presentation with interactive measurement tables, heatmap and plots. The measure-
ment features include total number of spots, average number of spots per cell, mean intensity
of spots, mean area of nuclei, mean intensity of nuclei (nuclear stain).
§ Translocation
This Bio App provides a simple automated image analysis workflow customized to automati-
cally measure the translocation ratio between the cytoplasm and the nucleus and a tailored
result presentation with interactive measurement tables, heatmap and plots. The measure-
ment features include the total number of cells, the translocation ratio, as well as the mean
intensity values of the nucleus and the ring.
1. Create a new setting for the respective Bio App. For more information, see Creating a Gen-
eral Bio Apps Setting [} 700] and the respective descriptions for each individual Bio App.
This step only has to be done once per Bio App, unless you want or need another setting for
a particular Bio App.
à The created setting can be used for analysis of all suitable images.
2. Use the respective setting to run your Bio App for images you want to analyze, see Run-
ning Bio Apps [} 700] or Running Bio Apps in Batch Mode [} 707].
The images are analyzed and the results can be displayed in a specific results view.
Before you can use a Bio App, you have to create a setting for it.
Prerequisite ü You have opened an image which is typical for the analysis scenario of the respective Bio App.
1. On the Analysis tab, in the Bio Apps tool, click on the Bio App.
à Parameters are displayed.
2. For a new setting, click and select New. Alternatively, if you have a setting you want
to use it as a template, click and select New from Template.
3. Enter a name for the new setting and click . Alternatively, enter a name and press
Enter.
à A new analysis setting file is created.
You have created a general setting which can now be opened in the Bio Apps wizard to set up
the analysis for the respective Bio App. For more information, refer to the descriptions of the indi-
vidual Bio Apps, see Available Bio Apps [} 699].
See also
2 Bio Apps Wizard [} 712]
Info
Analyzing in Batch Mode
You can also set up the analysis of multiple images with Bio Apps and their settings in Batch
Mode, see Running Bio Apps in Batch Mode [} 707].
When you run a Bio App, the image analysis defined in the specific setting is applied to the image.
ü You have created a setting for the Bio App you want to use, see Creating a General Bio
Apps Setting [} 700].
1. On the Analysis tab, in the Bio Apps tool, click on the Bio App.
2. For Setting, select the setting you have created for this Bio App.
3. Click Run Analysis.
à The analysis runs with the selected Bio App and a result screen opens. On this result
screen you can see the analyzed picture, charts, and a table.
4. Click Finish to close the result screen.
You have successfully run your Bio App. The analysis results are saved in the image file and can be
displayed and exported with the Bio Apps view, see Bio Apps View [} 708].
Info
File Types
Bio Apps can only be applied to suitable image files. If you try to use them with an unsuitable
image type, a message is displayed. The following types of images are not supported for this
Bio App:
4 Z-Stacks
4 Unprocessed Airyscan data
4 Unprocessed Apotome data
4 Multi-phase images
4 Multi-block images
4 PSF images
Prerequisite ü You have opened an image which is typical for the analysis scenario.
ü You have set up a general setting for your Bio App, see Creating a General Bio Apps Setting
[} 700].
1. On the Analysis tab, in the Bio Apps tool, click on Cell Counting.
à The parameters are displayed.
2. Select your created setting as well as your image and click Create Setting.
à The Bio Apps wizard opens.
3. Enter a Name for the objects you want to segment (e.g. Nuclei).
4. In the channel control, select the channel which contains the necessary information for the
analysis (the channel in which the cell nuclei have been stained). For multi-channel images,
which contain one of the following channels (DAPI, Hoechst, To-pro-3, HCS Nuclear Mask
Deep Red), this channel is automatically preselected.
5. Select a Color for the resulting masks.
6. Select Manual if you want to define the objects manually by clicking in the image. Other-
wise, select Automatic. In the Manual mode, you can click on the objects you want to
segment and the lower and higher threshold values will be adapted automatically. Alterna-
tively you can directly enter a Threshold (lowest and highest value) or use the Histogram
for the pixel values used by the segmentation. For color images, you can set the threshold
for each color channel.
7. If you want to use machine learning for segmentation, click Semantic or Instance for se-
mantic or instance segmentation. The prerequisite for semantic segmentation is that you
have installed the 3rd party Python Tools during the installation of ZEN. For instance seg-
mentation, you need to have the Docker Desktop software running.
à A dropdown to select your model is displayed. It contains your own trained and imported
networks if they have been trained on one channel. For semantic segmentation, a default
neural network for the segmentation of fluorescently labeled nuclei is provided.
8. In the AI Model dropdown, select the model you want to use for segmentation.
9. Select the Model Class that should be used for segmentation and set a Min. Confidence.
For instance segmentation you also have to select the AI Model Version.
10. If you want to perform a rolling ball background subtraction, click On.
11. Set the lowest and highest value for the Area and Circularity measurement to filter out
unwanted objects.
à A result preview is displayed by the Image View. The unwanted objects are displayed in
white.
12. If you want to manually include certain objects, for Pick to Include click + and then click
on the object in the image.
à The values employed for the filters Area and Circularity are updated to include the
selected object and any other objects that fulfil the newly adapted filter criteria.
13. Click Finish.
You have created a setting for Cell Counting which can now be used to analyze images by clicking
Run Analysis, see Running Bio Apps [} 700].
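The Area and Circularity filters in the steps above work on per-object measurements. Circularity is commonly computed as 4πA/P², which equals 1.0 for a perfect circle and decreases for irregular shapes; a sketch of such a filter (illustrative only — the object representation below is hypothetical, and ZEN's exact circularity definition may differ):

```python
import math

def circularity(area, perimeter):
    """4*pi*A / P^2 -- 1.0 for a perfect circle, smaller for irregular shapes."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def filter_objects(objects, area_range, circ_range):
    """Keep objects whose area and circularity fall inside the given ranges.
    Each object is a dict with measured 'area' and 'perimeter' values."""
    kept = []
    for obj in objects:
        a = obj["area"]
        c = circularity(a, obj["perimeter"])
        if area_range[0] <= a <= area_range[1] and circ_range[0] <= c <= circ_range[1]:
            kept.append(obj)
    return kept

# A round nucleus passes; small, ragged debris is filtered out:
objects = [{"area": 300.0, "perimeter": 65.0},   # compact, nucleus-like
           {"area": 5.0, "perimeter": 40.0}]     # small and very irregular
print(filter_objects(objects, (20.0, 500.0), (0.5, 1.0)))
```

Manually picking an object to include corresponds to widening these ranges just enough that the picked object's measurements pass the filter.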
Info
File Types
Bio Apps can only be applied to suitable image files. If you try to use them with an unsuitable
image type, a message is displayed. The following types of images are not supported for this
Bio App:
4 Z-Stacks
4 Unprocessed Airyscan data
4 Unprocessed Apotome data
4 Multi-phase images
4 Multi-block images
4 PSF images
Prerequisite ü You have opened an image which is typical for the analysis scenario.
ü You have set up a general setting for your Bio App, see Creating a General Bio Apps Setting
[} 700].
1. On the Analysis tab, in the Bio Apps tool, click on Confluency.
à The parameters are displayed.
2. Select your created setting and click Create Setting.
à The Bio Apps wizard opens.
3. Enter a Name for the class you want to segment.
4. In the channel control, select the channel which contains the necessary information for the
analysis. For multi-channel images, which contain a channel out of the list (Bright, Oblique,
DIC and PGC), this channel is automatically preselected.
5. Select a Color for the resulting masks.
à Per default, the Segmentation Type is set to Manual to use a manual variance-based
segmentation.
6. Select if you want to segment the Structure in your image or the Background (this inverts
the applied threshold values).
7. Define the Threshold for the variance calculated from one pixel with the neighboring pix-
els and adjust the Kernel Size for this calculation.
8. If you want to use machine learning for segmentation, click AI-based (the prerequisite is
that you have installed the 3rd party Python Tools during the installation of ZEN).
à A dropdown to select your model is displayed. It contains your own trained and imported
networks if they have been trained on one channel.
9. In the AI Model dropdown, select the model you want to use for segmentation.
10. Select the Model Class that should be used for segmentation and set a Min. Confidence.
11. Set the Min. Object Size to define the minimum area in pixels that an object must have to
be segmented.
12. Activate Fill all holes if you want to close the holes in the segmented masks. Otherwise
leave Fill all holes deactivated.
13. Set the Min. Hole Size to define the minimum area in pixels of the holes in the detected ob-
jects.
14. Click Finish.
You have created a setting for Confluency which can now be used to analyze images by clicking
Run Analysis, see Running Bio Apps [} 700].
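The manual variance-based segmentation described above classifies a pixel as "covered by cells" when the gray-value variance in its neighborhood (of the chosen Kernel Size) exceeds the Threshold; selecting Background inverts the comparison. A simplified illustration of the principle (not ZEN's implementation):

```python
def local_variance(img, k):
    """Variance of each pixel's k x k neighborhood (edges clamped)."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            m = sum(vals) / len(vals)
            out[y][x] = sum((v - m) ** 2 for v in vals) / len(vals)
    return out

def segment(img, k, threshold, invert=False):
    """Mask of textured (cell-covered) pixels; invert to segment background."""
    var = local_variance(img, k)
    return [[(v <= threshold) if invert else (v > threshold) for v in row]
            for row in var]

def confluency_percent(mask):
    """Covered area as a percentage of the whole image."""
    flat = [p for row in mask for p in row]
    return 100.0 * sum(flat) / len(flat)
```

Smooth background regions have near-zero local variance and fall below the threshold, while textured cell-covered regions exceed it; the covered-area percentage is then the confluency measurement.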
Info
File Types
Bio Apps can only be applied to suitable image files. If you try to use them with an unsuitable
image type, a message is displayed. The following types of images are not supported for this
Bio App:
4 Z-Stacks
4 Unprocessed Airyscan data
4 Unprocessed Apotome data
4 Multi-phase images
4 Multi-block images
4 PSF images
Prerequisite ü You have opened an image with at least two channels that is typical for the analysis scenario.
ü You have set up a general setting for your Bio App, see Creating a General Bio Apps Setting
[} 700].
1. On the Analysis tab, in the Bio Apps tool, click on Gene- and Protein Expression.
à The parameters are displayed.
2. Select your created setting as well as your image and click Create Setting.
à The Bio Apps wizard opens.
3. Enter a Name for the segmented objects (the nuclei), select the Channel in which the nu-
clei are stained as well as a Color for the resulting masks.
4. Select Manual if you want to define the objects manually by clicking in the image. Other-
wise, select Automatic. In the Manual mode, you can click on the objects you want to
segment and the lower and higher threshold values will be adapted automatically. Alterna-
tively you can directly enter a Threshold (lowest and highest value) or use the Histogram
for the pixel values used by the segmentation. For color images, you can set the threshold
for each color channel.
5. If you want to use machine learning for segmentation, click Semantic or Instance for se-
mantic or instance segmentation. The prerequisite for semantic segmentation is that you
have installed the 3rd party Python Tools during the installation of ZEN. For instance seg-
mentation, you need to have the Docker Desktop software running.
à A dropdown to select your model is displayed. It contains your own trained and imported
networks if they have been trained on one channel. For semantic segmentation, a default
neural network for the segmentation of fluorescently labeled nuclei is provided.
6. In the AI Model dropdown, select the model you want to use for segmentation.
7. Select the Model Class that should be used for segmentation and set a Min. Confidence.
For instance segmentation you also have to select the AI Model Version.
8. If you want to perform a rolling ball background subtraction, click On.
9. Set the lowest and highest value for the Area and Circularity measurement to filter out
unwanted objects.
à A result preview is displayed by the Image View. The unwanted objects are displayed in
white.
10. If you want to manually include certain objects, for Pick to Include click + and then click
on the object in the image.
à The values employed for the filters Area and Circularity are updated to include the
selected object and any other objects that fulfil the newly adapted filter criteria.
11. Set the distance between the boundary of the masks of the segmented nuclei and the rings
where the transfection is measured. A negative value means that the ring already begins in-
side the nucleus. A positive value creates a ring that starts at a distance from the bound-
ary of the nuclei masks.
12. Set the width of the rings displayed in the image.
13. Click Next.
à The Gene Expression step opens.
14. Enter a Name for the transfected cells, select the Channel in which you want to measure
the transfection as well as a Color for the resulting masks.
15. Set the lowest and highest value for the mean intensity of the transfection channel, for
which the cells are counted as "positive".
16. If you manually want to include certain intensities, for Pick to Include click + and then
click on the area in the image.
à The values employed for Intensity Mean are updated accordingly.
17. Click Finish.
You have created a setting for Gene Expression which can now be used to analyze images by clicking Run Analysis, see Running Bio Apps [} 700]. As a result, this setting calculates the transfection efficiency, which can then be displayed in the Bio Apps view after the analysis has run.
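The ring geometry defined above (ring distance and width) can be sketched with plain NumPy morphology. This is an illustrative sketch only, not ZEN's implementation; the function name and the 4-connected dilation/erosion are assumptions made for the example.

```python
import numpy as np

def _dilate(mask):
    # One 4-connected binary dilation step (up/down/left/right neighbors).
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def _erode(mask):
    # Erosion as dilation of the complement.
    return ~_dilate(~mask)

def ring_mask(nucleus, distance, width):
    """Illustrative ring around a binary nucleus mask.

    distance: pixel offset from the mask boundary to the inner ring edge;
              a negative value starts the ring inside the nucleus.
    width:    ring width in pixels.
    """
    inner = nucleus
    for _ in range(abs(distance)):
        inner = _dilate(inner) if distance > 0 else _erode(inner)
    outer = inner
    for _ in range(width):
        outer = _dilate(outer)
    return outer & ~inner

# Tiny example: a 3x3 square "nucleus" in an 11x11 field
field = np.zeros((11, 11), dtype=bool)
field[4:7, 4:7] = True
ring = ring_mask(field, distance=0, width=1)
```

A negative distance erodes the mask first, so the ring begins inside the nucleus, matching the description in the step above.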
Info
File Types
Bio Apps can only be applied to suitable image files. If you try to use them with an unsuitable
image type, a message is displayed. The following types of images are not supported for this
Bio App:
4 Z-Stacks
4 Unprocessed Airyscan data
4 Unprocessed ApoTome data
4 Multi-phase images
4 Multi-block images
4 PSF images
Prerequisite ü You have opened an image that is typical for the analysis scenario.
ü You have set up a general setting for your Bio App, see Creating a General Bio Apps Setting
[} 700].
1. On the Analysis tab, in the Bio Apps tool, click on Automated Spot Detection.
à The parameters are displayed.
2. Select your created setting as well as your image and click Create Setting.
à The Bio Apps wizard opens.
3. Enter a Name for the nuclei you want to segment, select the Channel which contains the
necessary information as well as a Color for the resulting masks.
4. For the segmentation of the nuclei, select Manual if you want to define the objects manually by clicking in the image. Otherwise, select Automatic. In the Manual mode, you can click on the objects you want to segment, and the lower and higher threshold values will be adapted automatically. Alternatively, you can directly enter a Threshold (lowest and highest value) or use the Histogram to adjust the pixel value range used for the segmentation. For color images, you can set the threshold for each color channel.
5. If you want to use machine learning for segmentation, click Semantic or Instance for se-
mantic or instance segmentation. The prerequisite for semantic segmentation is that you
have installed the 3rd party Python Tools during the installation of ZEN. For instance seg-
mentation, you need to have the Docker Desktop software running.
à A dropdown to select your model is displayed. It contains your own trained and imported
networks if they have been trained on one channel. For semantic segmentation, a default
neural network for the segmentation of fluorescently labeled nuclei is provided.
6. In the AI Model dropdown, select the model you want to use for segmentation.
7. Select the Model Class that should be used for segmentation and set a Min. Confidence.
For instance segmentation you also have to select the AI Model Version.
8. If you want to perform a rolling ball background subtraction, click On.
9. Set the lowest and highest value for the Area and Circularity measurement to filter out
unwanted objects.
à A result preview is displayed by the Image View. The unwanted objects are displayed in
white.
10. If you manually want to include certain objects, for Pick to Include click + and then click on the object in the image.
à The values employed for the filters Area and Circularity are updated accordingly to in-
clude the selected object and any other objects that fulfil the newly adapted filter criteria.
11. Set the distance between the boundary of the masks of the segmented nuclei and the rings where the spots should be detected. A negative value means that the ring already begins inside the nucleus. A positive value creates a ring that starts at a distance from the boundary of the nuclei masks.
12. Set the width of the rings displayed in the image.
13. Click Next.
à The second step of the wizard opens.
14. Enter a Name for the spots, select the Channel with which the spots are stained as well as
a Color for the resulting masks.
15. For the segmentation of the spots, select Manual if you want to define the objects manually by clicking in the image. Otherwise, select Automatic. In the Manual mode, you can click on the objects you want to segment, and the lower and higher threshold values will be adapted automatically. Alternatively, you can directly enter a Threshold (Low and High) or use the Histogram to adjust the pixel value range used for the segmentation. For color images, you can set the threshold for each color channel.
16. If you want to use machine learning for segmentation, click Semantic or Instance for se-
mantic or instance segmentation.
à A dropdown to select your model is displayed.
17. In the AI Model dropdown, select the model you want to use for segmentation.
18. Select the Model Class that should be used for segmentation and set a Min. Confidence.
For instance segmentation you also have to select the AI Model Version.
à A result preview is displayed by the Image View.
à For the Spots step, a rolling ball background subtraction is automatically enabled.
19. If you do not want to perform a rolling ball background subtraction, click Off.
20. Set the lowest and highest value for the Area and Circularity measurement to filter out
unwanted objects.
à A result preview is displayed by the Image View. The unwanted objects are displayed in
white.
21. If you manually want to include certain objects, for Pick to Include click + and then click
on the object in the image.
à The values employed for the filters Area and Circularity are updated accordingly to in-
clude the selected object and any other objects that fulfil the newly adapted filter criteria.
22. Click Finish.
You have created a setting for Spot Detection which can now be used to analyze images by click-
ing Run Analysis, see Running Bio Apps [} 700].
Info
File Types
Bio Apps can only be applied to suitable image files. If you try to use them with an unsuitable
image type, a message is displayed. The following types of images are not supported for this
Bio App:
4 Z-Stacks
4 Unprocessed Airyscan data
4 Unprocessed ApoTome data
4 Multi-phase images
4 Multi-block images
4 PSF images
Prerequisite ü You have opened a multichannel image which is typical for the analysis scenario.
ü You have set up a general setting for your Bio App, see Creating a General Bio Apps Setting
[} 700].
1. On the Analysis tab, in the Bio Apps tool, click on Translocation.
à The parameters are displayed.
2. Select your created setting as well as your image and click Create Setting.
à The Bio Apps wizard opens.
3. Enter a Name for the segmented objects (the nuclei), select the Channel in which the nu-
clei are stained as well as a Color for the resulting masks.
4. Select Manual if you want to define the objects manually by clicking in the image. Otherwise, select Automatic. In the Manual mode, you can click on the objects you want to segment, and the lower and higher threshold values will be adapted automatically. Alternatively, you can directly enter a Threshold (lowest and highest value) or use the Histogram to adjust the pixel value range used for the segmentation. For color images, you can set the threshold for each color channel.
5. If you want to use machine learning for segmentation, click Semantic or Instance for se-
mantic or instance segmentation. The prerequisite for semantic segmentation is that you
have installed the 3rd party Python Tools during the installation of ZEN. For instance seg-
mentation, you need to have the Docker Desktop software running.
à A dropdown to select your model is displayed. It contains your own trained and imported
networks if they have been trained on one channel. For semantic segmentation, a default
neural network for the segmentation of fluorescently labeled nuclei is provided.
6. In the AI Model dropdown, select the model you want to use for segmentation.
7. Select the Model Class that should be used for segmentation and set a Min. Confidence.
For instance segmentation you also have to select the AI Model Version.
8. If you want to perform a rolling ball background subtraction, click On.
9. Set the lowest and highest value for the Area and Circularity measurement to filter out
unwanted objects.
à A result preview is displayed by the image view. The unwanted objects are displayed in
white.
10. If you manually want to include certain objects, for Pick to Include click + and then click
on the object in the image.
à The values employed for the filters Area and Circularity are updated accordingly to in-
clude the selected object and any other objects that fulfil the newly adapted filter criteria.
11. Select the Translocation Channel.
12. Set the distance between the boundary of the masks of the segmented nuclei and the rings where the translocation is measured. A positive value creates a ring that starts at a distance from the boundary of the nuclei masks.
13. Set the width of the rings.
14. Click Finish.
You have created a setting for translocation which can now be used to analyze images by clicking
Run Analysis, see Running Bio Apps [} 700].
2. Click +Add.
à A file browser opens.
3. Select an image you want to analyze and click Open.
à The image is added to the list in the Batch Processing view.
4. Repeat the previous steps until all images you want to analyze are opened in the Batch Processing view. You can also select multiple images by holding the Ctrl key while selecting.
à All images are displayed in the list.
5. In the list of the Batch Processing view, select an image.
6. In the Batch Method tool, select Bio Apps.
à Parameters are available for the image.
7. In the Parameters tool, select the Bio App and the Setting you want to use for the im-
age.
à You have set the Bio App analysis for the selected image.
8. Repeat the previous steps until you have set the analysis for all opened images. If you want to analyze multiple images with the same Bio App and setting, you can also select multiple images by holding the Ctrl key, or all images by pressing Ctrl + A.
à You have set the Bio App analysis for all images.
9. Click Apply. Alternatively, if you want to run the analysis for only some of the images, se-
lect the images in the list and click Run Selected.
à The images are analyzed with the respective Bio Apps and settings.
à The analyzed images are saved in the defined output folder. The results can be displayed
and exported if you open the resulting images in ZEN and use the Bio Apps view, see
Exporting Results of the Bio Apps Analysis [} 708].
Prerequisite ü You have analyzed an image with a Bio App, see Running Bio Apps [} 700] or Running Bio
Apps in Batch Mode [} 707].
ü You have opened the results of the analysis in the Bio Apps view.
1. Set the displayed information (chart, chart axis, table, etc.) in the Bio Apps view according
to your needs. The export uses the currently displayed information.
2. In the Export tab, activate the checkboxes for all the information you want to export and
select the corresponding format for each with the dropdown lists.
3. Click Export.
à A file browser opens.
4. Name the file for export, navigate to the folder where the results should be exported to
and click Save.
This view is only available for images which have been analyzed by a Bio App and if you have the license for the Bio Apps toolkit. Here you can see the result of the image analysis conducted by the Bio App, and a table and plot section shows the result data of the analysis. The information displayed by the plots and table is specific to each Bio App.
1 Image View
Displays the currently selected image of your analyzed image document as well as the
masking objects of the analysis (depending on the settings done in the Objects tab).
2 Result Chart
Displays the chart for the analysis results of the Bio App. The chart type and the Mea-
surement features displayed on the axis can be set in the Chart view options tab.
3 Result Table
Displays the table with the results of the Bio App analysis.
4 View Options
Displays general view options as well as Bio App specific options.
See also
2 Bio Apps [} 699]
This tab enables you to export various results of your Bio Apps analysis.
Parameter Description
Image Activated: Selects the current result image for export with the format
selected in the dropdown.
Table Activated: Selects the currently displayed table for export with the
format selected in the dropdown.
Chart Activated: Selects the currently displayed chart for export with the
format selected in the dropdown. The resolution for the exported
charts is 300 dpi.
Parameter Description
– Width (pixel) Sets the width for the exported chart in pixels.
– Height (pixel) Sets the height for the exported chart in pixels.
Processing Info Activated: Selects the current processing information for export with
the format selected in the dropdown.
Export Opens the file browser to export all the selected result documents.
See also
2 Bio Apps View [} 708]
2 Exporting Results of the Bio Apps Analysis [} 708]
Parameter Description
Show Objects Activated: Displays the resulting mask of the analysis in the image.
Object Display Sets how the masks are displayed in the image.
Region Class Dropdown Selects the region class for which the chart and table are displayed.
See also
2 Bio Apps View [} 708]
2 Bio Apps Wizard [} 712]
Parameter Description
Data Selects whether the displayed results in the table and chart are for the entire carrier or a single scene.
– Single Scene Displays the results for a single scene. The chart can be toggled be-
tween histogram and xy-plot.
Parameter Description
Chart Type Selects which chart type is displayed.
– Heatmap (Single Time Point) Only visible if Entire Carrier is selected and the image has multiple time points.
Displays a heatmap for the current time point. To go to another time point, use the Time control on the Dimensions tab.
– Line (Total Time Range) Only visible if Entire Carrier is selected and the image has multiple time points.
Displays a line chart with the data of the total time range.
– Fit to Whole Data Set Fits the range to the data of the whole image.
– Fit to Single Time Point Fits the range to the data of the current time point.
X-Axis Only visible if Single Scene or Line (Total Time Range) is selected.
Selects which value is used for the x-axis of the chart.
Y-Axis Only visible if Single Scene and XY-Plot, or Line (Total Time
Range) is selected.
Selects which measurement feature is used for the y-axis of the chart.
This tool enables you to see and start the different available Bio Apps.
Parameter Description
Recent Displays recently used Bio Apps.
– New Creates a new analysis setting. Enter a name for the setting.
Parameter Description
– Edit Opens the Bio Apps wizard to edit the setting, see Bio Apps Wizard
[} 712].
– Save Saves a modified setting under the current name. An asterisk indicates
the modified state.
– Save As Saves the current setting under a new name. Enter a name for the
setting.
Input Displays the currently opened images in ZEN which serve as input for
the Bio App.
Create Setting Only visible if you have selected a Bio App and if you have created a
new setting.
Opens the wizard of the selected Bio App which allows you to modify
the parameters of the analysis setting.
See also
2 Bio Apps [} 699]
2 Available Bio Apps [} 699]
2 General Workflow [} 700]
In this wizard you define the settings for your Bio Apps. It shows your image, the settings for the Bio App, basic view options as in the 2D view, and an additional Legends window. This window displays information about the individual regions in the image and can be toggled on and off with the right-click menu entry Regions Legend.
The options and parameters shown here in the wizard are, for the most part, Bio App specific.
The following parameters are generally available:
Parameter Description
Name Sets the name for the class/objects that are analyzed by this Bio App.
Color Selects a color for the resulting masks. Note: Use a different color
than the channel color to be able to differentiate between mask and
measurement signal.
See also
Parameter Description
Segm. Method Selects the method for segmentation.
– Automatic Uses threshold values that are determined automatically from the his-
togram based on the Otsu method. For all possible threshold values,
the Otsu method calculates the variance of intensities on each side of
the respective threshold. It minimizes the sum of the variances for the
background and the foreground.
– Manual Sets the threshold manually by clicking on the regions in the image
that you want to segment, or by using the Threshold control dis-
played below.
– Low Defines the lowest pixel intensity considered for the segmentation.
Parameter Description
Histogram Only visible if Manual is selected.
In the histogram you can change the lower and upper threshold value
by dragging the lower or upper adjustment handle or shift the entire
highlighted area between the lower and upper threshold value.
BG Subtraction
Area Sets the lowest and highest value for the area of the objects. For
more information, see the list of Measurement Features [} 440].
Circularity Sets the lowest and highest value for the roundness of the objects.
For more information, see the list of Measurement Features [} 440].
+ Enables you to expand the values employed for the region filters (area
Pick to Include and circularity) by clicking on objects in the image.
See also
2 Downloading AI Models [} 71]
2 Creating a Setting for Cell Counting [} 701]
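The Otsu criterion described for the Automatic method above (minimizing the summed background and foreground variance, equivalently maximizing the between-class variance) can be written out in a few lines. This is an illustrative implementation of the published method, not the code used by ZEN.

```python
import numpy as np

def otsu_threshold(pixels, bins=256):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of background vs. foreground pixels
    (equivalent to minimizing the summed within-class variances)."""
    hist, edges = np.histogram(pixels, bins=bins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2
    total = hist.sum()
    sum_all = (hist * centers).sum()
    best_t, best_between = centers[0], -1.0
    w_b = 0.0   # cumulative background weight
    sum_b = 0.0 # cumulative background intensity sum
    for i in range(bins - 1):
        w_b += hist[i]
        sum_b += hist[i] * centers[i]
        w_f = total - w_b
        if w_b == 0 or w_f == 0:
            continue
        m_b = sum_b / w_b
        m_f = (sum_all - sum_b) / w_f
        between = w_b * w_f * (m_b - m_f) ** 2
        if between > best_between:
            best_between, best_t = between, centers[i]
    return best_t

# Bimodal sample: dark background around 20, bright nuclei around 200
rng = np.random.default_rng(0)
sample = np.concatenate([rng.normal(20, 5, 5000), rng.normal(200, 10, 1000)])
t = otsu_threshold(sample)
```

For a well-separated bimodal histogram like this one, the returned threshold lands between the two intensity clusters.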
Parameter Description
Segmentation Selects the type of segmentation.
Type
– Manual Uses a manual threshold by clicking on the regions in the image that
you want to segment, or by using the Threshold control displayed
below.
Parameter Description
Kernel Size Only visible if Manual is selected.
Sets the kernel size to calculate the variance value of one pixel with
the neighboring pixels.
Min. Object Size Sets the minimum size in pixels that an object must have to be segmented.
– Off Fills holes in the segmented objects only if they are smaller than the
specified Min. Hole Area.
Min. Hole Size Sets the minimum area in pixels for the holes in the detected objects. The input is synchronized with Min. Object Size, which cannot be smaller than Min. Hole Size.
See also
2 Creating a Setting for Confluency [} 702]
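One plausible reading of the Kernel Size description above (the variance of each pixel with its neighboring pixels) is a local variance map followed by a threshold, so that textured, cell-covered areas stand out against flat background. The sketch below illustrates that idea; the function names and the threshold value are assumptions for the example, not ZEN's implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, kernel_size):
    """Variance of each pixel with its neighbors inside a
    kernel_size x kernel_size window: Var = E[x^2] - E[x]^2."""
    img = img.astype(float)
    mean = uniform_filter(img, size=kernel_size)
    mean_sq = uniform_filter(img * img, size=kernel_size)
    return np.maximum(mean_sq - mean * mean, 0.0)

# Flat background has near-zero variance; a textured patch does not
img = np.full((32, 32), 50.0)
img[8:24, 8:24] += np.indices((16, 16)).sum(axis=0) % 2 * 20  # checkerboard
var = local_variance(img, kernel_size=3)
mask = var > 1.0  # illustrative threshold for the "confluent" area
```

A larger kernel smooths the variance map and merges fine texture; a smaller kernel preserves detail, which mirrors the trade-off the Kernel Size parameter controls.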
For Gene- and Protein Expression you have to specify settings for the nuclei segmentation as
well as for quantification of the gene expression in the respective step.
Parameter Description
Segm. Method Only visible in the Nuclei step.
Selects the method for segmentation.
– Automatic Uses threshold values that are determined automatically from the his-
togram based on the Otsu method. For all possible threshold values,
the Otsu method calculates the variance of intensities on each side of
the respective threshold. It minimizes the sum of the variances for the
background and the foreground.
– Manual Sets the threshold manually by clicking on the regions in the image
that you want to segment, or by using the Threshold control dis-
played below.
Parameter Description
– Instance Uses an AI model for instance segmentation to automatically segment
(fluorescently labeled) cell nuclei. The AI models for instance segmen-
tation need the software Docker Desktop to run.
– Low Defines the lowest pixel intensity considered for the segmentation.
Pick to Segment Only visible in the Nuclei step and if Manual is selected.
BG Subtraction
Pick to Include Enables you to expand the values employed for the region filters (area
+ and circularity) and the mean intensity in the second step by clicking
on objects in the image.
Parameter Description
Ring Distance Only available in the Nuclei step.
Sets the distance between the inner border of the ring and the border
of the segmented nuclei.
Intensity Mean Only available in the Gene- and Protein Expression step.
Sets the lowest and highest value for the mean intensity measurement
of the selected channel. For more information, see the list of Mea-
surement Features [} 440]. If the mean intensity of a cell falls into
that defined range, it is considered as a positive cell.
See also
2 Downloading AI Models [} 71]
2 Creating a Setting for Gene- and Protein Expression [} 703]
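The positive-cell rule described for Intensity Mean can be illustrated as follows. The helper and the example values are hypothetical; only the in-range classification mirrors the description above.

```python
import numpy as np

def positive_cells(mean_intensities, low, high):
    """Classify cells as 'positive' when their mean intensity in the
    expression channel falls inside the [low, high] range, and report
    the percentage of positive cells."""
    means = np.asarray(mean_intensities, dtype=float)
    positive = (means >= low) & (means <= high)
    efficiency = positive.mean() * 100.0  # percent positive cells
    return positive, efficiency

# Hypothetical per-cell mean intensities from the transfection channel
means = [12.0, 85.0, 140.0, 40.0, 210.0]
pos, eff = positive_cells(means, low=50.0, high=200.0)
```

With this range, two of the five example cells fall inside [50, 200] and would be counted as positive.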
For Automated Spot Detection you have to specify settings for the nuclei segmentation as well
as the spot detection itself in the respective step.
Parameter Description
Segm. Method Selects the method for segmentation.
– Automatic Uses threshold values that are determined automatically from the his-
togram based on the Otsu method. For all possible threshold values,
the Otsu method calculates the variance of intensities on each side of
the respective threshold. It minimizes the sum of the variances for the
background and the foreground.
– Manual Sets the threshold manually by clicking on the regions in the image
that you want to segment, or by using the Threshold control dis-
played below.
Parameter Description
Min. Confidence Only visible if Semantic or Instance is selected.
Sets the minimum confidence in % for the prediction of every object.
– Low Defines the lowest pixel intensity considered for the segmentation.
BG Subtraction
Area Sets the lowest and highest value for the area of the objects. For
more information, see the list of Measurement Features [} 440].
Circularity Sets the lowest and highest value for the roundness of the objects.
For more information, see the list of Measurement Features [} 440].
+ Enables you to expand the values employed for the region filters (area
Pick to Include and circularity) by clicking on objects in the image.
See also
2 Downloading AI Models [} 71]
2 Creating a Setting for Automated Spot Detection [} 705]
Parameter Description
Segm. Method Selects the method for segmentation.
– Automatic Uses threshold values that are determined automatically from the his-
togram based on the Otsu method. For all possible threshold values,
the Otsu method calculates the variance of intensities on each side of
the respective threshold. It minimizes the sum of the variances for the
background and the foreground.
– Manual Sets the threshold manually by clicking on the regions in the image
that you want to segment, or by using the Threshold control dis-
played below.
– Low Defines the lowest pixel intensity considered for the segmentation.
Parameter Description
BG Subtraction Only visible if Automatic or Manual is selected.
Area Sets the lowest and highest value for the area of the objects. For
more information, see the list of Measurement Features [} 440].
Circularity Sets the lowest and highest value for the roundness of the objects.
For more information, see the list of Measurement Features [} 440].
Pick to Include Enables you to expand the values employed for the region filters (area
+ and circularity) and the mean intensity in the second step by clicking
on objects in the image.
Ring Distance Sets the distance between the inner border of the ring and the border
of the segmented nuclei.
Ring Width Sets the width of the ring where the translocation is measured.
See also
2 Creating a Setting for Translocation [} 706]
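The Circularity filter that appears in these parameter tables is commonly defined as 4πA/P², which is 1.0 for an ideal circle and smaller for elongated or ragged shapes. The sketch below illustrates that common definition; see the Measurement Features list [} 440] for the feature actually used by ZEN.

```python
import math

def circularity(area, perimeter):
    """Common roundness measure 4*pi*A / P^2: 1.0 for an ideal circle,
    smaller for elongated or ragged shapes (one common definition; the
    exact feature definition may differ)."""
    return 4.0 * math.pi * area / (perimeter * perimeter)

# Ideal circle of radius r: area = pi*r^2, perimeter = 2*pi*r -> 1.0
r = 5.0
c_circle = circularity(math.pi * r * r, 2 * math.pi * r)
# Square of side 5: area = 25, perimeter = 20 -> pi/4 (about 0.785)
c_square = circularity(25.0, 20.0)
```

Setting a lower bound on this value therefore rejects elongated or irregular objects while keeping round, nucleus-like ones.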
§ CAT [} 722]
§ Dynamics Profiler [} 770] (Basic & Advanced)
§ EM Processing Toolbox [} 793]
Airyscan RAW Data (available for LSM 900 and LSM 980) is part of the Airyscan processing functionality and allows you to save and export Airyscan data in a format that offers the best compatibility with third-party and self-programmed processing tools. If you have licensed the Airyscan processing functionality, the function to get the original 32ch raw data is also available. By default, Airyscan data is stored in a 4ch format, which allows faster processing. The Sheppard sum export is also available in this format.
If you want to acquire and save the data in the 32ch Airyscan format, do the following:
See also
2 Processing Tab [} 873]
This module enables ZEN lite to receive microscope settings from coded manual instruments, dis-
play the information in ZEN lite and save it in the image files. Moreover, Coded Microscope en-
ables illumination control from ZEN lite.
This module enables you to perform automated imaging of ultra-thin serial sections (ribbons) using the light and scanning electron microscopes. After calibration of the sample carrier and detection of the sections, regions of interest can be defined manually in a single section and will be automatically propagated to all sections. The selected regions of interest can then be imaged with different contrast methods and magnifications using the LM.
In the SEM the previously defined regions of interest will then be imaged automatically after load-
ing the image previously acquired at the LM. The corresponding 2D image sequences recorded by
the LM and SEM are aligned into a 3D Z-Stack using the integrated alignment and correlation al-
gorithms of the ZEN Correlative Array Tomography module. This process results in a correlative 3D
data set combining LM and SEM information into one image volume.
For the correlative workflow, one CAT module has to be installed on the widefield system, a sec-
ond module has to be installed on the SEM. A detailed how-to guide of the workflow can be
found in the chapter The CAT Workflow [} 722].
The software module can be used with ZEISS widefield microscopes as well as with ZEISS scanning electron microscopes. In addition to the CAT tool, the module offers four wizards. Detailed descriptions of the functions of the tools and wizards can be found in the linked chapters.
Before you can work with the CAT module, you have to check the following settings on the light microscope system (hardware and software settings). In general, the system is calibrated by a service technician, but we recommend checking the settings again, especially when you have changed components, e.g. objectives or filter cubes. As these general settings are not described here in detail, please ask your service technician or read the ZEN Online Help.
For working with the CAT module you need to set up an experiment in the ZEN software first. As
this is already described in the ZEN Online Help, we will focus here on the most important settings
which are essential for the CAT workflow:
§ In the menu Tools > Options > Acquisition > Acquisition tab the checkbox Enable Ad-
vanced Imaging Setup must be activated.
In this chapter you will find how-to guides describing the typical CAT workflow. The chapter is for
users who search for an introduction to the CAT module and workflow. Starting from general
preparations to the acquisition on the LM (light microscope), we will also explain how to acquire
images with the SEM (Scanning Electron Microscope). After the image acquisition we will focus
on the image alignment and correlation.
Please note that we will not explain how to set up an experiment in detail, as this step is beyond the scope of this guide, which focuses mainly on the CAT workflow. Instead, please read the chapter Sample Preparation [} 723], where we describe the most important prerequisites for a CAT experiment. We will also not cover the further processing of the resulting images.
See also
2 Acquiring the LM image [} 729]
2 Acquiring the SEM Image [} 740]
If you have configured your experiment in ZEN (e.g. a multi-channel experiment), the next step is to create and select a sample. When you work with the software for the first time, you have to create a new sample first.
1. Click on Select/Specify.
à The Select Sample dialog opens.
Info
Note that specifying the correct number of sample carriers is important for the numbering of
the ribbons/sections afterwards. The sample information will be stored within the image data
and will be used for further image processing and data management.
1. In the Sample Holder section click on Select… to open the Select Template dialog and
to choose the correlative sample holder you want to use. Different types of correlative
holders are available, see Appendix Correlative Sample Holders [} 767].
2. In the Select Template dialog select the correlative holder you want to use. If you want to
use your own sample holders, click on the Add button below the list and follow the
instructions in the chapter Defining new sample holder templates [} 726].
With this dialog you can define new correlative holders in addition to the existing holder tem-
plates. It is not mandatory to use correlative holders from ZEISS. User-defined correlative holders
with 3 fiducial markers can be used as well.
1. To open the dialog click on Add in the Select Template dialog. This dialog can be
opened via the Shuttle & Find tool.
à The New Template dialog opens.
2. Type in a name for the new holder or sample carrier. An image of the new holder can be
loaded as well.
3. Insert the distances (in millimeters) between the first and the second marker and between
the second and third marker.
à The distances can be determined using the Stage Control dialog accessible via the Light Path tool in the Right Tool Area. We recommend doing this before starting the New Template dialog. Write down the distances so that you are prepared to enter them in the New Template dialog.
à Activate the live view in the Center Screen Area by clicking on the Live button in the Lo-
cate tab.
à Navigate the stage manually to the calibration marker on the sample holder by means of
the joystick and note the x/y-coordinates of the marker.
à Repeat this procedure for all three markers and calculate the distances between marker 1
and marker 2 and between marker 2 and marker 3, respectively.
Correlative sample holders have three fiducial markers enabling a three-point calibration (labeled with the numbers 1-2-3). The calibration markers consist of one small L-shaped marker (length 50 µm) and one large L-shaped marker (length 1 mm). The larger marker is used for coarse orientation, whereas the smaller marker is used for the calibration.
1. Click on Live in the Acquisition tab to activate the live view in the Center Screen Area.
2. Navigate the stage manually to the first calibration marker on the sample holder (marked
with No. 1) by means of the joystick. It is enough if you move the stage to the larger L-
shaped calibration marker. The smaller marker will be detected automatically within the
Sample Holder Calibration Wizard. To locate the marker positions we recommend using
a dry objective with low magnification (5x – 20x).
à After setting marker position 3 you will find a green check mark icon which shows that
the calibration was successful.
The image acquisition is performed with the help of the Acquisition Wizard, which opens when you click on Start Acquisition Wizard in the Correlative Array Tomography tool.
The wizard contains the following 7 steps:
Info
Take care that the Auto checkbox on the Dimensions tab is deactivated.
Prerequisite ü You have started the Acquisition Wizard via the CAT tool.
ü You are in step 1/7 Overview Imaging.
1. Check if Image Acquisition mode is selected. This is the default setting when entering the
wizard.
2. From the Experiment dropdown list select the experiment that you have prepared in ad-
vance.
3. From the Objective list select an objective with a low magnification, e.g. 5x.
4. Select the Channel and the Light Source you want to use for acquiring the overview im-
age. For the overview image we recommend selecting Phase contrast as channel mode.
5. Move the stage to the upper left corner of your sample.
6. Click on Set start position to define the starting position of the overview image.
7. Move the stage to the bottom right corner of your sample.
8. Click on Set end position to define the end position of the overview image.
9. Click on Acquire Overview Image.
→ The overview image will be acquired. Then you should see the complete sample showing all ribbons you want to image.
10. Click on Apply Stitching to remove the offset between the single tile images.
You have successfully acquired the overview image. You can now continue with the next step by
clicking on Next.
Info
When no detailed sample information is necessary to identify regions of interest within the
sections, you can skip this step and the following step 3 Ribbon imaging as well. You can then
go on with the wizard step 4 Section specification [} 733].
Prerequisite ✓ You are in step 2/7 Ribbon Definition of the Acquisition Wizard.
1. Use the tools on the Ribbon Definition tab to mark the contour lines of the ribbons which
should be imaged. The contour lines are displayed in yellow color.
→ The software will automatically create as many tiles as necessary for imaging the ribbons. The number of tiles depends on the selected objective. The frames of the tiles will be displayed in red.
Please note that this is an optional step and must be performed only when you have defined rib-
bons as described in step 2. In summary, you have to perform the same actions mentioned in step
1 but you should use an objective with higher magnification and apply the Global focus strategy
under Focus Surface.
→ The support points will be distributed automatically over the ribbons. They are displayed as yellow circles with a point in the middle.
4. If required, you can add further support points by using the Add button below the
Distribute Support Points button. Simply click on the image at the position where you
would like to add another support point.
5. Click on Verify Support Points.
→ Now you can check if each support point is in focus. You will see the overview image in the right image container and the detail image in the left image container. The verification process will start with the first support point which was set. The current support point is marked with a red crosshair. When you activate Show stage position within the image on the Ribbon Definition tab below the Center Screen Area, you will see the current position of the stage in the image as a rectangle with a blue dashed frame.
6. Hold CTRL on your keyboard and use the mouse wheel to adjust the focus for the corre-
sponding support point.
7. When the support point is in focus click on Confirm.
→ The software will automatically move to the next support point.
8. Repeat the last two steps until you have corrected and verified all support points. At the
end of the process you will see the message All points have been verified.
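The verified support points define a focus surface from which the focus position of each tile can be interpolated. As an illustration only (ZEN's actual interpolation model is not documented here), a minimal least-squares plane fit over hypothetical support points:

```python
import numpy as np

# Hypothetical support points: stage position (x, y) with the verified focus z.
support_xy = np.array([[0, 0], [10, 0], [0, 8], [10, 8], [5, 4]], float)
support_z  = np.array([1.00, 1.20, 0.90, 1.10, 1.05])

# Fit a plane z = a*x + b*y + c through the support points (least squares).
M = np.hstack([support_xy, np.ones((len(support_xy), 1))])
(a, b, c), *_ = np.linalg.lstsq(M, support_z, rcond=None)

def focus_at(x, y):
    """Interpolated focus position for an arbitrary tile center."""
    return a * x + b * y + c

print(round(focus_at(5.0, 4.0), 3))  # → 1.05
```

More support points make the fitted surface more robust against a single badly focused point, which is why additional points can be added where the sample is uneven.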
In this step all sections will be identified by using a section detection algorithm. In summary, you have to mark the outline of at least one section on each ribbon. The section detection algorithm will then detect the sections of the ribbon automatically. If the automatic section detection does not work properly, or if not all sections are detected, you can stamp in the missing sections. It is also possible to edit the shape, location and orientation of the section frames afterwards.
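The role of the contrast threshold in the section detection can be pictured with a much simplified, one-dimensional sketch (invented data; this is not the ZEN algorithm): sections appear as spans where the intensity drops below a threshold, and loosening the threshold, analogous to lowering the Detection Sensitivity, also picks up low-contrast sections at the risk of false positives.

```python
import numpy as np

# Hypothetical intensity profile along a ribbon axis: bright substrate (~200)
# interrupted by darker sections. Invented data for illustration.
profile = np.full(100, 200.0)
profile[10:25] = 80.0   # section 1
profile[35:50] = 80.0   # section 2
profile[60:75] = 130.0  # section 3, weaker contrast

def detect_sections(profile, threshold):
    """Return (start, stop) index pairs where the profile drops below threshold."""
    mask = profile < threshold
    edges = np.flatnonzero(np.diff(mask.astype(int)))
    starts = edges[::2] + 1   # rising edges of the mask
    stops = edges[1::2] + 1   # falling edges of the mask
    return list(zip(starts.tolist(), stops.tolist()))

# Strict threshold (high-sensitivity analogue) misses the low-contrast section:
print(detect_sections(profile, 100))  # → [(10, 25), (35, 50)]
# Looser threshold (low-sensitivity analogue) also finds the weak section:
print(detect_sections(profile, 150))  # → [(10, 25), (35, 50), (60, 75)]
```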
2. Mark the outline of one section in each ribbon. The outlines of these reference contours are
displayed in orange color.
3. Click on Apply.
→ The software will try to detect the remaining sections automatically. When finished, the detected sections appear in green.
4. If not all sections can be detected, mark the last section which was detected and click on the Stamp tool in the Section Definition tab.
5. Stamp in the missing sections so that each section is marked.
→ Please take your time to check the numbering carefully. A correct numbering is a prerequisite for the successful alignment of the sections afterwards. The numbering of the ribbons depends on how you deposited the ribbons during the cutting. To adjust the numbering you have several options available in the context menu. To open the context menu, move the cursor over a section and right-click with the mouse.
6. If the numbering is correct, proceed to the next wizard step by clicking on Next.
2. Mark the desired ROIs in one section. Marked ROIs will be displayed in purple color.
3. Click on Apply.
→ The software will position the defined regions of interest in each section according to the section contours. The detected ROIs then appear on each section of the ribbons.
If the ROIs are detected correctly, proceed with the next wizard step by clicking on Next.
In this step we will image the ROIs using a high magnification objective and apply a local focus
strategy. This will result in very detailed and sharp images of the ROIs which are used for the fur-
ther processing (e.g. creating Z-Stacks and image correlation with SEM images).
→ As in step 3 Ribbon Imaging, the support points will be distributed automatically. The support points are distributed alternately outside and inside a ROI to guarantee the best focusing results.
This step is basically used for re-acquiring images from ROIs that are out of focus.
You have successfully completed the Acquisition Wizard for the light microscope images of your
ribbons. Continue with the process described in the next chapter of this guide.
Prerequisite ✓ You have acquired the LM image according to the instructions in the chapter Acquiring the LM image [} 729].
✓ You have copied/transferred the image data of the light microscope to the SEM PC.
1. Start the SmartSEM software. Note that the SEM software is used for setting up the acquisition parameters, e.g. detector settings, magnification, display settings and scan speed.
2. Start the ZEN SEM software. Make sure that SmartSEM is started before you start ZEN.
→ You will see the SEM Acquisition tab and the Correlative Array Tomography (CAT) tool.
Prerequisite ✓ You have done the general preparations, see Preparations for the SEM Image [} 740].
1. In the CAT tool click on Start Acquisition Wizard.
→ You will see the first step Overview Imaging of the wizard.
2. Click on Load Image.
3. Select the image file containing the ROIs from the CAT/LM folder on your file system.
4. Click on Next.
→ The wizard will now jump directly to step 6/7 ROI Imaging as the software recognizes the marked ROIs in the image file.
→ Before starting to acquire the ROIs, we recommend performing the offset correction. Proceed as follows:
5. In the ROI image click on a position with a prominent structure.
→ The stage will move to that position automatically. You may now notice a difference between the position you clicked on and the actual position. This is the offset we want to correct now.
6. Activate the offset correction checkbox.
→ The 4 correction points are distributed automatically within the image. The correction points look like the support points (red outline with a red dot in the middle).
8. Move the correction points to prominent positions on the sample containing structures
which are easy to recognize, e.g. the corner of the ribbons.
9. Click on Set correction offset. Note that for setting the offset correction you should use
the same magnification which will be used for image acquisition later.
→ The stage moves to the first correction point.
→ In the left image you will see the SEM image position. In the right image you will see the correction point on the LM image. The positions do not match exactly.
10. Move the SEM stage so that its position will match the correction point in the LM image.
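Conceptually, the correction points yield pairs of positions from which the residual offset between the LM coordinates and the SEM stage can be estimated, e.g. as the mean displacement over all points. A minimal sketch with invented coordinates (not ZEN's implementation):

```python
import numpy as np

# Hypothetical point pairs: where each correction point lies in the LM image
# versus the matching position found with the SEM stage (µm). Invented values.
lm_points  = np.array([[100.0, 100.0], [900.0, 120.0], [110.0, 700.0], [880.0, 690.0]])
sem_points = np.array([[103.5,  97.0], [903.5, 117.0], [113.5, 697.0], [883.5, 687.0]])

# Estimate the offset as the mean residual over all correction points.
offset = (sem_points - lm_points).mean(axis=0)  # here (3.5, -3.0)

def corrected_target(lm_xy):
    """Stage target that compensates the estimated offset."""
    return np.asarray(lm_xy) - offset
```

Averaging over several well-spread points makes the estimate less sensitive to a single imprecisely set correction point, which is why prominent structures such as ribbon corners are recommended.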
▪ Image Import
▪ Pre-Processing
▪ Image Review
▪ Alignment
▪ Manual Correction
▪ Final Image Creation
1. In the CAT tool click on Start Z-Stack Alignment Wizard.
→ You will see the first step Image Import of the wizard.
2. Click on Load Image and select the acquired Z-Stack image from the file system. In our ex-
ample we choose the LM image. The same process has to be performed for the SEM image
afterwards.
→ You will see the Z-Stack image in the center screen area.
3. Click on Next.
→ You will see step 2/6 Pre-Processing.
4. If your image is a tile image, click on Apply Stitching to correct the offset between the in-
dividual tiles.
5. If your image is a SEM image, click on Histogram Equalization to adjust the image dis-
play.
6. Click on Next.
→ You will see step 3/6 Image Review. Note that in this step no image acquisition is possible. You can just replace an image by the next or previous image of the Z-Stack.
7. Select the image to be replaced by clicking on it with the mouse and click on Replace with
next or Replace with previous. Alternatively, you can press the N or the P key. Note that
the table of the replaced image will not be saved.
8. Click on Next.
9. If you have acquired a multi-channel image, select the reference channel from the Channel
list.
10. Click on Start Alignment.
→ The alignment of the Z-Stack image will be performed automatically. After the alignment you will see the original Z-Stack image in the left image container and the aligned Z-Stack image in the right image container.
11. Click on Next.
→ You will see step 5/6 Manual Correction.
→ If you browse through the Z-Stack by using the Z-Position slider and still notice a shift between the single Z-Stack images, you can perform a manual correction of the single Z-Stack images. Proceed as follows:
12. Click on Define direction.
→ A red arrow will appear in the right and in the left image.
13. Place the arrow in the left image at a prominent structure in the image which is easy to rec-
ognize through the full Z-Stack.
14. Select the right image and browse through the Z-Stack by using the Z-Slider on the Dimen-
sions tab.
15. When you notice a shift in an image, adjust the arrow in the right image so that it matches the prominent structure marked with the arrow in the left image. Note that you have to check and adjust the arrow for each image of the Z-Stack which does not match the position.
You have successfully aligned and created the Z-Stack image. Of course you have to repeat the
process for the SEM image that was acquired.
▪ Import Z-Stacks
▪ Correlation
▪ Manual Correction
▪ Create Final Correlation Image
1. In the CAT tool click on Start Correlation Wizard.
→ You will see the first step Import Z-Stacks.
2. Click on Left Container to load the aligned Z-Stack image from the SEM.
3. Click on Right Container to load the aligned Z-Stack image from the LM.
4. Click on Next.
→ You will see step 2/4 Correlation.
5. Under Transform decide whether you want to transform the Left Z-Stack into the Right
Z-Stack or vice versa.
6. Under Mode select 4 Points.
7. Click on Set Points.
8. Set the first 3 correlation points in the first Z-Stack image of the left image.
9. Set the corresponding 3 correlation points in the first Z-Stack image of the right image.
→ Once the third point has been set in the right image, the software automatically jumps to the last Z-Stack image.
10. Set the fourth correlation point in the left image first and then set it in the right image.
13. In this step you can manually correct the alignment of the images by moving and/or rotat-
ing the images according to each other. To rotate the image use the handle at the top of
the image frame. To move the image simply left click on the image and hold the mouse
button pressed while moving the image.
14. If you finished the alignment of an image click on Accept. You can browse through the
correlated Z-Stack images by using the Z-Position slider in the Dimension tab.
15. Click on Next.
→ You will see step 4/4 Create Final Correlation Image.
16. Click on Create Final Z-Stack.
→ The correlated Z-Stack image will be created.
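The 4 Points mode above determines a transform from four point correspondences between the two Z-Stacks. For illustration (with invented coordinates; ZEN's internal transform model is not documented here), a projective transform can be computed from four point pairs via the direct linear transform:

```python
import numpy as np

def homography_from_points(src, dst):
    """3x3 projective transform mapping 4 src points onto 4 dst points (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null vector of this 8x9 system.
    _, _, vt = np.linalg.svd(np.array(rows))
    return vt[-1].reshape(3, 3)

def apply_homography(H, pts):
    """Apply H to an array of (x, y) points, including perspective division."""
    hom = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return hom[:, :2] / hom[:, 2:]

# Hypothetical correlation points picked in the LM and SEM Z-Stacks:
lm  = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 80.0], [0.0, 80.0]])
sem = np.array([[10.0, 5.0], [210.0, 15.0], [205.0, 175.0], [5.0, 165.0]])

H = homography_from_points(lm, sem)
print(np.round(apply_homography(H, lm), 2))  # reproduces the sem points
```

Four correspondences are the minimum for a projective transform (eight degrees of freedom), which matches the wizard asking for exactly four points per Z-Stack.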
Using this tool you can calibrate and manage the sample holders and start the wizards which are used for acquiring images from serial sections, generating Z-Stack images out of the single images, and correlating two Z-Stack images from the light microscope (LM) and the scanning electron microscope (SEM).
Parameter Description
Sample Name: Displays the name of the sample.
Select...: Opens the Select Template dialog. There you select the preferred sample holder or define new holder templates, see Selecting the sample holder.
Calibrate...: Opens the Sample Holder Calibration Wizard [} 753]. There you can calibrate the selected sample holder.
Start Acquisition Wizard: Starts the Acquisition Wizard. For more information, see Acquisition Wizard [} 756].
Start Z-Stack Alignment Wizard: Starts the Z-Stack Alignment Wizard. For more information, see Z-Stack Alignment Wizard [} 763].
Start Correlation Wizard: Starts the Correlation Wizard. For more information, see Correlation Wizard [} 765].
Parameter Description
Folder: Shows the location where the files are saved. If you click on the button, you can change the storage location. The default path and folder is C:\Users\user\Pictures\CAT Samples. Within this folder, each sample is saved in a sub-folder. Images taken during image acquisition within the CAT Acquisition Wizard will be saved within the sub-folder automatically. For better clarity, a sub-folder named "[Date][Time]" is generated each time another CAT run is started within the CAT Acquisition Wizard. If the file name exceeds a certain number of characters, the name is shortened using the "°" character.
List of specified samples: Shows the samples which are already specified within the software. If you select a sample in the list and click OK, the sample will be used in your experiment.
Information: Shows the specified sample information, e.g. name, description, number of sample carriers, sample carrier, and section thickness.
See also
2 CAT Tool [} 752]
With the Sample Holder Calibration Wizard you calibrate the selected correlative sample holder.
Make sure that you have selected the desired sample holder, see Selecting the sample holder.
Option Description
Save marker images: Activated: the marker images are saved during the calibration. The images can be used to check the calibration afterwards. Click on the Select Folder (...) button to select a storage folder.
Move the stage to load position before x/y movement: Activated: the stage will move to the load position before moving to the next correlative calibration marker. If an AxioObserver is used, the objective revolver moves to the load position.
Automatic movement to next marker: Activated: by clicking on the Next button within the wizard, the stage moves automatically to the next calibration marker. Deactivated: you must use the joystick to navigate to the markers. This is necessary when using a correlative holder for which no holder data is deposited.
Use Autofocus at each marker position: This option is active only if the Automatic movement to next marker checkbox is activated. Activated: the focus is adjusted automatically after moving to the next marker position.
Use automatic marker detection: Activated: the software will detect the small calibration marker automatically.
Use settings for marker detection: This option is active only if the Use automatic marker detection checkbox is activated. Activated: shows settings for marker detection (see description below). Here you select the properties of the calibration markers.
Option Description
Threshold marker detection (high – low): A low threshold for marker detection is used when the dimensions of the correlative L-markers cannot be recognized precisely, e.g. when the sample holder is slightly dirty.
Marker color: Here you select the color of the markers displayed in the live image. White: the marker is displayed white on a dark background. Black: the marker is displayed dark on a light background. Auto: the marker color is set automatically.
Marker orientation: Here you set the orientation of the L-markers on your sample holder. Click on the corresponding button to select the orientation of the calibration marker which you can see in the live image.
If you click on the Next button, you will move to the next step of the wizard.
In steps 2-4 of the wizard you will be guided through the calibration procedure.
Option Function
Holder position: Move to Position 1 button: moves the stage to marker position 1. This is possible only if the first position was set before and x/y coordinates are given. Current button (only visible for marker positions 2 and 3): moves the stage to the current marker position. This is possible only if the current position was set before and x/y coordinates are given.
Stage movement to the next marker: Here you can change the movement of the stage in x or y direction. This is necessary if the stage moves in the wrong direction during calibration.
Marker position: By clicking on the Set button, the current marker position is confirmed.
This wizard is used to image the serial sections or user-defined regions of interest within the sections.
The steps Overview Imaging, Ribbon Imaging, ROI Imaging and Re-Shoot are image acquisition steps. The step Re-Shoot gives you the opportunity to image parts of the ROI series or tiles of a tile image later.
The wizard consists of 7 steps which are described in the following chapters:
In this step you can acquire an overview image that allows navigation on the sample. You will see
the positions of the serial sections on the sample carrier. In general, for the overview image an
objective with low magnification is used. This makes the acquisition fast due to a large field of
view and limited number of tiles.
Image acquisition with objectives with higher magnification is possible, but keep in mind that the number of tiles will increase due to the smaller field of view, and so will the acquisition time.
Info
We recommend using phase contrast images for the acquisition. The algorithm used for the automatic section specification (see step 4) is most reliable then.
Parameter Description
Image Acquisition/Load Image: By selecting the corresponding button, you decide whether to acquire an overview image or load an image.
Image Selection: If you have selected Load Image, a saved image file can be chosen from the file system. Simply click on the button and navigate to your image file. The wizard will jump to the wizard step according to the information saved within the loaded image.
Experiment: If you have selected Image Acquisition, you have to select an experiment from the Experiment list. Note that the experiment has to be set up and saved before you enter the wizard.
Objective: Here you select the objective that you want to use for the acquisition of the overview image. As mentioned before, we recommend using an objective with a low magnification (e.g. 2.5x or 5x).
Channels: Here you select the channels that you want to use for the acquisition of the overview image. You can use more than one channel in one run if your microscope is equipped with a motorized condenser.
Selected Light Source: Here you select the light source that you want to use for the acquisition of the overview image. The light intensity can be adapted if a corresponding light source is selected.
Camera Settings: Here you can adapt the camera settings, e.g. change the exposure time or activate/deactivate the shading correction. If a shading correction has been performed and activated in the selected experiment, the checkbox will also be activated automatically in the wizard.
Software Autofocus: Here you can activate the Software Autofocus functionality and apply it to the overview image. If activated, you can select the positions for focusing. During focusing no live image will be visible. Note that sensitive fluorescence labels might be bleached during the autofocus process.
Overview Image Definition: Here you define the size of the overview image by setting a start and an end position. The software will calculate the area by means of the defined start and end position (= overview image). The number of tiles and the memory used will be displayed below the buttons.
- Set start position: Sets the current stage position as start position of the image area.
- Set end position: Sets the current stage position as end position of the image area.
- Move to start position: Moves the stage automatically to the defined start position. Note that the start position has to be defined first.
- Move to end position: Moves the stage automatically to the defined end position. Note that the end position has to be defined first.
No. of sample carrier: Only active if you use more than one sample carrier for one correlative Z-Stack. Here you select the number of the used sample carrier. The total number of used sample carriers was defined in the CAT tool under Select/Specify sample.
Move the stage to load position before xy movement: Activated: the stage will move to the load position before moving to the next correlative calibration marker. If an AxioObserver is used, the objective revolver moves to the load position.
Acquire Overview Image: Starts the image acquisition. The acquisition can be stopped in between; the button then changes to Restart. Before you restart the image acquisition, you can modify the settings. The status of the image acquisition is shown in the status bar of the software. After the overview image is taken, the image can be stitched. If you click on the Apply Stitching button, stitching will be carried out.
See also
2 Ribbon Definition [} 759]
This step is optional. It allows you to image the ribbons with an objective with higher magnification, if necessary, e.g. when you would like to define the regions of interest by means of sample structures that can only be identified using lenses with higher magnification.
For this purpose you can mark the outlines of the ribbon on your sample. If you cannot see the sample structures because the overview image was acquired at too low a magnification, you can image your sample again using a higher magnification.
For marking the outlines in the image use the tools from the Ribbon Definition tab (e.g. Rectan-
gle, Circle or Polygon).
See also
2 Ribbon Imaging [} 759]
For this step, a split view will appear. On the left side you see the Live image. On the right side
you see the overview image with the defined ribbons.
Info
To modify either the Live image or the image with the defined ribbons click on the correspond-
ing container. The activated container will be marked with a white frame.
Again, like in step one for the overview image, you have to select the objective, channel, light
source and adapt the exposure time. Additionally, you have to generate a focus surface to ensure
that your sample will be in focus during the image acquisition.
If you click on the Create Ribbon Image button, the ribbon image will be acquired.
See also
2 Section Specification [} 759]
To determine the positions of regions of interest (ROIs) within the sections, you have to define the
sections. The section lines generate the reference system for the ROI positions. The sections are
marked and outlined with a frame.
Info
Note that you must mark one section on each ribbon. That means if your sample has three ribbons, three sections have to be marked in total.
We recommend using phase contrast images for the section specification. The algorithm used for
the automatic section specification is most reliable when using phase contrast images. When
bright field images are used, the algorithm might be suboptimal. In that case you have the oppor-
tunity to add and to move the section contours manually.
On the Section Definition tab you will find tools and options for creating sections in the image.
Parameter Description
Reference Section
- Contour: By selecting this mode, you can mark a contour line of a section.
- Keep: Keeps the selected tool active. You can then use the tool several times without interruption.
Section Index: Here you determine the starting number of the ribbons. This is important when the ribbons of a sample are deposited on more than one sample carrier.
Selected Channel: Shows the selected channel which is used for the detection.
Detection Sensitivity: Here you can adjust the detection sensitivity from Low to High by using the slider. This is done by modifying the contrast thresholds for the section detection algorithm. When setting a low sensitivity, sections will be recognized even if the contrast between the section and the substrate is low; disadvantage: sections will be recognized even in areas where no serial sections are deposited. When setting a high sensitivity, the algorithm only recognizes sections if there is a high contrast between section and substrate. If not all sections are recognized, you have the possibility to copy section contours or to stamp section contours.
Section Detection
- Apply: Starts the section detection on the sample. The software tries to detect each section within the ribbon.
Contrast Method
- Auto: Auto is used by default. The system recognizes the contrast method of the image automatically.
- Ph. Contrast: Applies the phase contrast method. Even if you are using a brightfield image, phase contrast will be applied as contrast method.
- Brightfield: Applies the brightfield contrast method. Even if you are using a phase contrast image, brightfield will be applied as contrast method.
Use Internal Structure: Activate this checkbox only when sample structures are clearly visible within the sections. If activated, sample structures are used for section detection in addition to contrast differences between sections and substrate. In case sections are not detected properly, you have the possibility either to stamp section contours or to copy section contours.
Post Definitions
- Stamp tool: By selecting this tool, you can stamp in undetected sections after the section detection is finished. Simply select the tool and move the mouse cursor into the area near the last detected section. The cursor will change to a stamp icon and you will be able to stamp in the missing section contours.
- Accept Ref. Section: If you click on this button, reference contours will be transformed into section contours.
When the section detection is finished, you have different options for sorting the sections accord-
ing to your needs, if necessary. Therefore right-click on the detected sections. You will see a con-
text menu with the following sorting options:
Parameter Description
Sort:
▪ Sort all sections in reverse order: Sorts all sections which have a section contour. The initial section with number 1 becomes the last section; the last section becomes the first section with number 1.
▪ Sort selected Ribbon elements in reverse order: Sorts the selected sections on a ribbon. The initial section with number 1 becomes the last section; the last section becomes the first section with number 1.
Copy selected Ribbon sections from here: Copies all section contours on the selected ribbon.
Paste Section(s) to here: Pastes the section contours (a selection of certain section contours or all sections of a ribbon) to the selected position.
See also
2 ROI Specification [} 761]
In this step you can screen your sample for interesting sample regions (ROIs) and mark these areas with a graphical element. You can define several regions of interest within one section.
On the ROI Definition tab you can draw either a rectangle, a circle or a freehand polygon/con-
tour. Click on the Apply button to automatically identify the region of interest in all other sec-
tions. It is also possible to Undo/Redo an action using the corresponding buttons. To remove a
graphical element select it and click on the Delete (bin icon) button.
Info
With the arrow keys on your keyboard you can jump from one ROI to the next ROI along the
series to check if the structure of interest is still within the defined region of interest.
See also
2 ROI Imaging [} 762]
With this step you can image the ROIs which were detected and marked in the previous step. The tile images will be generated automatically from all defined regions of interest.
Info
The size of the snapped tile images of a ROI series can change due to the number of tiles
which are necessary to image the defined region of interest. The number of tiles can vary due
to the bending of the ribbon.
See also
2 Re-Shoot [} 762]
8.3.8.7 Re-Shoot
This step is helpful if some tiles or regions of interest are blurry. These tiles/regions can be re-
placed by repeating the acquisition of the selected tiles or tile images. The procedure is as follows:
Parameter Description
Select Tiles: If this mode is active, you can select the tiles which you want to re-shoot. Use the Z-Position slider on the Dimension tab or the arrows within the image area to scroll through the acquired images. If you find a tile image that you want to re-shoot, simply click on it. The color of the image frame then turns from red to green. Note that all tiles or blurry regions have to be defined before the image acquisition can be repeated.
Acquire: If this mode is active, you can acquire the selected tiles again after the focus was adjusted manually. If you click on this button, the stage will move to the first tile and the following buttons will appear:
- Snap: Acquires a new image.
- Correct Brightness: In case the tile is brighter or darker, here you have the possibility to adapt the brightness of the tile image.
This wizard is used to align the single images of a Z-Stack image. The wizard consists of 6 steps
which are described in the following chapters:
In this step you can load your acquired Z-Stack images which you want to align. Therefore simply
click on the Load button and select the image file from the file system.
See also
2 Pre-Processing [} 763]
8.3.9.2 Pre-Processing
In this step you can perform pre-processing functions on the loaded image, e.g. Stitching (only for
tile images), Brightness and Contrast Correction (only for SEM images).
Parameter Description
Apply Stitching button: Only visible if a tile image is loaded. If you click on this button, stitching is performed automatically on the image. The stitching can be canceled (Undo) or repeated (Redo) by using the arrow buttons.
Clip Limit: Reduces noise in the image. The higher the Clip Limit, the lower the noise. The clip limit can be adjusted between 0 and 10%.
Region Size: Defines the region for histogram equalization. The smaller the area, the higher the contrast, but the noise will increase, too. The Region Size can be adjusted from 16 to 1024 px.
Histogram Equalization: If you click on this button, the SEM images are adapted to the selected values. The Histogram Equalization can be canceled (Undo) or repeated (Redo) by using the arrow buttons.
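The Clip Limit and Region Size parameters correspond to the two ingredients of contrast-limited histogram equalization: a cap on how strongly any intensity range may be stretched, applied per image region. The sketch below is a simplified, global (single-region) variant with invented data, just to illustrate the clipping idea; it is not ZEN's implementation:

```python
import numpy as np

def clipped_equalization(img, clip_fraction=0.05):
    """Simplified, global variant of contrast-limited histogram equalization.

    Histogram bins exceeding clip_fraction of all pixels are clipped and the
    excess counts are redistributed, limiting contrast amplification.
    """
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    limit = max(1, int(clip_fraction * img.size))
    excess = np.sum(np.maximum(hist - limit, 0))
    hist = np.minimum(hist, limit) + excess // 256   # clip and redistribute
    cdf = np.cumsum(hist).astype(float)
    # Build a lookup table that maps the clipped CDF onto the full 0..255 range.
    lut = np.round((cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255).astype(np.uint8)
    return lut[img]

rng = np.random.default_rng(0)
img = rng.integers(90, 110, size=(64, 64), dtype=np.uint8)  # low-contrast image
out = clipped_equalization(img, clip_fraction=0.05)
print(img.std() < out.std())  # → True: the intensity range has been spread
```

Applying the same operation per tile of Region Size pixels (instead of globally) yields the local behavior described above: smaller regions give stronger local contrast at the cost of amplified noise.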
See also
2 Image Review [} 764]
This step is used for reviewing the single images of a Z-Stack. This is necessary because certain im-
ages might not be useful for 3D reconstruction due to problems during the image acquisition or
sample preparation issues (wrinkles or ruptures within the section). Such images can be replaced either by the previous or by the following image. To review the images, they can be displayed as single 2D images in the 2D view or as an image series in the Gallery view.
Parameter Description
2D View If selected, you can review the single images of a Z-Stack image by us-
ing the 2D view.
You can use the Z-Position slider to navigate through the single im-
ages.
To replace an image, select the image and click either on the Replace with next or the Replace with previous button.
If you click on the Undo button, the last action performed will be un-
done.
Gallery View If selected, you can review the single images by using the Gallery
view. The single images of a Z-Stack image are displayed as an image
gallery.
If you find an image that does not meet your expectations, simply select the image and replace it by the next or previous image.
See also
2 Alignment [} 764]
8.3.9.4 Alignment
In this step you perform the image alignment. To do so, simply click the Start Alignment button. To cancel the alignment, click the Stop button.
Info
Before you start the alignment, select one channel as the reference channel (e.g. DAPI, because it stains the nucleus, which is a suitable structure for alignment).
During alignment, a splitter view is visible. In the left container you can see the original images, in
the right container you can see the aligned images.
See also
2 Manual Correction [} 764]
In this step (optional) you can navigate through the aligned images and check the result of the
alignment.
If the results are unsatisfactory, you can correct the alignment of the images manually. Misalignment can occur when no characteristic structures are visible within the images.
See also
2 Final Image Creation [} 765]
In this last step, you create the final image.
Parameter Description
Total If selected, the complete image will be used for the image creation.
ROI If selected, only the ROI area will be used for the image creation.
This wizard is used to correlate a Z-Stack image from the Light Microscope (LM) with the Z-Stack
image from the Scanning Electron Microscope (SEM). The wizard consists of 4 steps which are de-
scribed in the following chapters:
In this step you can import the aligned Z-Stack images from the LM and the SEM, e.g. the Z-Stack
image from the LM in the left container and the Z-Stack image from the SEM in the right con-
tainer.
If you click on the Left Container button, the image is opened in the left image container.
If you click on the Right Container button, the image is opened in the right image container.
See also
2 Correlation [} 765]
8.3.10.2 Correlation
Parameter Description
Transform Here you select which Z-Stack will be transformed. During transforma-
tion, a pixel in the overlay image of the Z-Stack is calculated by using
pixels of the two original images that shall be overlaid/merged.
Interpolation Here you can select one of the following interpolation methods:
- Nearest Neighbor The gray value of the resulting pixel in the overlay image is taken from the nearest pixel. This interpolation method is very fast.
- Cubic The calculated pixel in the overlay image is assigned a gray value that is calculated by means of a polynomial function using the gray values of nearby pixels in the original images.
- 3-Points If selected, this mode enables you to set 6 correlation points after
clicking on the Set Points button (3 points in each Z-Stack in each
container).
- 4-Points If selected, this mode enables you to set 8 correlation points after
clicking on the Set Points button (4 points in each Z-Stack), 3 points
in the first z-section, the last point in the last section.
Correlation Points If you click on the Set Points buttons, you can set the correlation
points.
The number of correlation points depends on the selected algorithm. The cursor changes to a pipette symbol. Simply click in the
image to set the points. Start with setting the first three points in the
left container then set the corresponding correlation points in the
right container. If a correlation point is set, a check mark icon will ap-
pear in front of the corresponding point.
When you select the 4-Points-Algorithm, the display will move auto-
matically to the last image of the Z-Stack. Set the fourth correlation
point in both containers. Make sure that the positions in both Z-
Stacks are identical. After you have set all correlation points, the cursor changes back from the pipette to the arrow.
Reset deletes all correlation points in the image.
Create correlated Z-Stack If you click on this button, the correlated Z-Stack is generated and opened in a new image container.
See also
2 Manual Correction [} 766]
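The 3-Points mode reflects a general property of 2D affine transforms: three point correspondences determine the transform completely. The following sketch (not ZEN's algorithm; all names are illustrative) estimates an affine transform from three point pairs and applies it to a new point:

```python
# Illustrative sketch (not ZEN's algorithm): three point pairs fully
# determine a 2D affine transform, which is why the 3-Points mode
# asks for exactly three correlation points per container.

def solve3(a, b):
    # Gaussian elimination with partial pivoting for a 3x3 system a.x = b.
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

def affine_from_3_points(src, dst):
    # src, dst: lists of three (x, y) correlation points.
    A = [[x, y, 1.0] for x, y in src]
    ax = solve3(A, [x for x, _ in dst])   # x' = a*x + b*y + c
    ay = solve3(A, [y for _, y in dst])   # y' = d*x + e*y + f
    return ax, ay

def apply(ax, ay, p):
    x, y = p
    return (ax[0] * x + ax[1] * y + ax[2], ay[0] * x + ay[1] * y + ay[2])

# A pure translation by (5, -2), recovered from three point pairs:
src = [(0, 0), (10, 0), (0, 10)]
dst = [(5, -2), (15, -2), (5, 8)]
ax, ay = affine_from_3_points(src, dst)
print(apply(ax, ay, (3, 3)))   # → (8.0, 1.0)
```

The 4-Points mode adds a fourth correspondence in the last z-section, which additionally constrains the transform along z.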
In this step you can correct the correlation manually by moving and rotating the transformed im-
age.
To do so, simply click the Start button. Then you can interactively move the image by dragging it with the mouse, or rotate it by clicking the circle button attached to the top of the green image frame or by using the Rotation slider. You can also change the image opacity by adjusting the corresponding slider.
If you click the Accept button, the manual correction is applied to the correlated image.
See also
2 Create Final Correlation Image [} 767]
In this step you create the final correlation image. To do so, simply click the Create final Z-Stack button. Click Finish to exit the wizard.
Name Image
Life Science cover glass 22 x 22
Cover glass with fiducials 22 x 22
MAT Flat Stubs
MAT Universal A
MAT Universal B_A
This module allows you to use FCS (Fluorescence Correlation Spectroscopy) functionality together
with the Airyscan detector. The functionality has to be licensed. Without a license, you can only
open an existing document in one view with limited interaction.
The license Dynamics Profiler Basic expands the capability of the Airyscan detector and utilizes
the area information gathered from 32 circularly arranged detection elements for fluorescence correlation spectroscopy (FCS). A wizard guides you through the acquisition of your data. Based on a
reference image, you can evaluate up to 10 measurement spots consecutively and analyze them
with various fit models. Additionally, you can get access to the raw data of all Airyscan detection
elements for individual analysis.
The license Dynamics Profiler Advanced additionally enables you to analyze the dynamics with
pair correlation (e.g. to investigate diffusion barriers), and the active flow by providing speed in
µm/s and its direction.
The Dynamics Profiler functionality works for all RGB laser lines with the following wavelengths:
Info
405nm Laser
Note that you can also use the laser wavelength of 405nm, but only for acquiring the refer-
ence image. Performing spot measurements is not possible with 405nm. Additionally, to be
able to start the acquisition wizard, the 405nm must not be the selected main track.
Info
Acquisition with data compression
If you want to acquire your data with lossless compression, go to Tools > Options > Acquisi-
tion > Data Compression and make sure that Zstd (lossless) is selected.
Prerequisite ü You have started ZEN with the necessary licenses and set up your hardware.
ü You have set up an acquisition experiment with one or two Airyscan SR tracks. The FCS func-
tionality for Airyscan is only available, if you have set up a suitable experiment with supported
objective and laser line(s), see also Overview Supported Laser Lines [} 771]. Note that if you
have added two Airyscan tracks, the one currently selected in the Imaging Setup or Chan-
nels tool is the main track for the spot measurement, the other is considered the second
track.
1. On the Acquisition tab, in the Imaging Setup tool, click Dynamics Profiler.
à The Align Airyscan Detector step of the wizard opens.
2. Click Adjustment.
à An automatic adjustment of the Airyscan detector starts and the button changes to a
Stop button.
à If the automatic adjustment fails, dedicated controls are displayed on the left to manually
configure the detector adjustment.
3. Use the controls on the left side to adjust your Airyscan detector.
à The Quality and Status of the detector are displayed and updated on the left side.
à When the detector adjustment is successfully completed, the Snap Reference Image
step opens automatically.
4. Use the options of the step to set up the acquisition of the reference image. You can also
start a continuous acquisition by clicking Continuous and change the parameters with di-
rect visual feedback.
à If you click Continuous, a continuous acquisition starts. To continue, you have to click
Stop.
5. If you have set up your experiment with two tracks and want to acquire the reference im-
age with both, activate Second Track.
à The laser settings for the second Airyscan track are displayed so that you can adjust them; the Gain slider is active for both tracks.
6. Click Snap Reference Image.
à Your reference image is acquired with the current settings.
7. Click Next.
à The Set Up Acquisition Spots step opens.
8. In the Spots section, click + (activated by default) and click on your spots of interest in the
reference image. Note that you can add a maximum of ten spots.
à The spots are added to the experiment and displayed in the list on the left.
9. If you want to evaluate an individual spot, select it in the list and click Evaluate Spot. You can also move the spot or change its z-position during evaluation. However, note that the reference image does not adjust to the new z-position; in this case we recommend creating a new reference image by clicking Snap.
10. Set up the parameters for your time series experiment, including the measurement time.
Note that the maximum for spot measurement is 300 seconds.
11. Click Start Experiment to start the time series experiment for all spots.
à The scan experiment starts. The data is displayed in the table and charts in the Center
Screen Area.
à After the experiment is finished, the wizard closes automatically. If you have activated
Create a Reference Image after Spot Measurement, a second reference image (Post
Experiment image) is acquired after the spot experiment and before closing the wizard.
The Dynamics Profiler document is displayed in the main user interface of ZEN. You can now save
it and analyze the data with the help of the three dedicated views.
Dynamics Profiler documents are also supported by ZEN Connect, see ZEN Connect [} 619].
If you have activated Keep Positions and Keep Acquisition Settings in the last wizard step, the
spot positions and acquisition settings are taken over to the user experiment and are available for
the next acquisition with the wizard.
If you open the Dynamics Profiler document in the Info view, you can see the metadata of the
spot experiment. If you want to see the metadata for the reference image, open it as a separate
image in ZEN, see Opening the Reference Image as Separate Image [} 773].
See also
2 Adjust Airyscan Detector [} 775]
2 Snap Reference Image [} 776]
2 Set Up Acquisition Spots [} 777]
Prerequisite ü A Dynamics Profiler document is open in the Correlation, Diffusion, or Flow view.
1. Right click in the data table.
à A context menu is displayed.
2. If you want to save the complete table content as a text file, select Write Text to File.
à A file browser opens.
3. Select a folder, enter a name for the document and click Save.
à The data of the table is saved as a tab-separated text file (headers in the document correspond to the column names of the table) in the selected folder.
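The export format described above can be reproduced with a few lines of code. The sketch below uses hypothetical table data and Python's csv module to write the same tab-separated layout (header row followed by data rows):

```python
# Minimal sketch of the tab-separated export format described above.
# The table content here is hypothetical; ZEN writes the real table.
import csv
import io

headers = ["Name", "CPM", "Number of Molecules"]
rows = [["Spot 1", 12.4, 3.1], ["Spot 2", 9.8, 2.7]]

buf = io.StringIO()
writer = csv.writer(buf, delimiter="\t", lineterminator="\n")
writer.writerow(headers)   # headers correspond to the column names
writer.writerows(rows)
print(buf.getvalue())
```

Because the delimiter is a tab character, the resulting file opens cleanly in spreadsheet applications.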
Prerequisite ü A Dynamics Profiler document is open in the Correlation, Diffusion, or Flow view.
Prerequisite ü A Dynamics Profiler document is open in the Correlation view and the count rate chart is dis-
played.
1. In the table, select the spot entry for which you want to add a region. If you want to draw one region for several spots, you can also select multiple spots by pressing Ctrl.
à The chart is updated according to your selection.
The data of the region is used for the Diffusion and Flow view. The selection is saved in the
metadata of the document and is restored after it is reopened.
Prerequisite ü You have created a Dynamics Profiler document where you have set the Time Resolution to
ωr in the third step of the wizard.
ü This Dynamics Profiler document is open in the Correlation view.
ü The objective for your spot experiment must be the same as the objective you use for the ωr
calibration.
1. In the view options, open the ωr Calibration tab.
2. From the Dye dropdown list, select which dye you want to use for the calibration.
à The value for Diffusion Coefficient is updated automatically.
3. If you know the Diffusion Coefficient for your experiment, adapt the value in the input
field accordingly.
4. Click Calibrate.
à A file browser opens.
5. Select the folder where you want to save the calibration file and enter a file name.
6. Click Save.
à The custom ωr calibration is created and the file is saved.
See also
2 Using Custom ωr Calibration [} 774]
à The custom ωr calibration is loaded and the file name is displayed next to the button.
With this step you can adjust your Airyscan detector. Airyscan detector adjustment ensures good
data quality for multiple cross-correlation measurements. The controls are provided in case the
automatic adjustment fails.
Parameter Description
Adjustment Starts an automatic detector adjustment. When the adjustment is run-
ning, it changes to a Stop button.
Quality and Status Displays the status of the Airyscan detector adjustment.
Detector View Displays the current intensity distribution of the emission signal over
the detector elements.
The following parameters are only available if the automatic adjustment fails:
Zoom Sets the zoom level. Clicking the 1.x button behind the input field re-
sets the zoom level to the default.
High Intensity Laser Range Activated: Enables the higher laser power range between 0.2 and 100%. This setting is applied to all laser lines.
Master Gain Sets the master gain to control the voltage of the PMTs. Increasing
the gain of the PMT corresponds to a higher voltage of the detector.
The image gets brighter and you may be able to reduce the laser
power.
With a higher voltage, the noise level in the image increases as the
dark noise of the detector gets visible in the images, predominantly as
single bright pixels. The optimum between gain and noise depends on
your experimental requirements and on your sample.
Scan Speed Sets the scan speed with the slider or input field. The corresponding
values for Frame Time and Pixel Time are displayed above the
slider.
Fiber Position Displays controls for the manual adjustment of the fiber positions.
– Store Invis Correction Position Automatically Activated: Saves and stores the current values for the next alignment.
– Store Current Pos. Stores the current positions of x and y for further reference.
– Move To Stored Pos. Moves to the currently stored position and updates the X Position and Y Position accordingly.
See also
2 Setting Up and Performing a Dynamics Profiler Acquisition [} 771]
Parameter Description
Continuous Starts a continuous image acquisition. When the acquisition is run-
ning, it changes to a Stop button to stop the acquisition.
Zoom Sets the zoom level. Clicking the 1.x button behind the input field re-
sets the zoom level to the default.
Scan Speed Sets the scan speed with the slider or input field. The corresponding
values for Frame Time and Pixel Time are displayed above the
slider.
– Max Automatically sets the maximal possible scan speed. When you click
this button once, it changes its color to blue. This indicates that the
system always uses the highest possible scan speed as you continue
to change acquisition parameters like zoom or frame size. Click the
button again to deactivate this permanently active state.
Frame Size Adjusts the frame size (in pixel) for the image acquisition. The corre-
sponding Image Size and Pixel Size are displayed above the input
fields.
– Presets Opens a dropdown which allows you to select a frame size from a list
of default frame sizes.
Sampling
– Confocal Sets the frame size (image resolution) to an optimal value correspond-
ing to the optical magnification (objective), the zoom factor and the
wavelengths included in the experiment. This provides an image
where no spatial information is lost and no empty data is generated
as optimal sampling is achieved. The confocal value is calculated for
the given objective and magnification settings, matching one-fold sampling according to Nyquist. Rectangular image dimensions are preserved.
– The laser scans in one direction only, then moves back with beam
blanked and scans the next line.
– The laser also scans when moving backwards, i.e. the scan time is re-
duced by about a factor of two. Note that bi-directional scanning can
result in a pixel shift between forward and backward movement (dou-
ble image).
High Intensity Laser Range Activated: Enables the higher laser power range between 0.2 and 100%. This setting is applied to all laser lines.
Second Track Only visible if you have defined two Airyscan tracks.
Activated: Uses also the second Airyscan track to acquire the refer-
ence image and displays the laser setting for the second track.
Master Gain Sets the master gain to control the voltage of the PMTs. Increasing
the gain of the PMT corresponds to a higher voltage of the detector.
The image gets brighter and you may be able to reduce the laser
power.
With a higher voltage, the noise level in the image increases as the
dark noise of the detector gets visible in the images, predominantly as
single bright pixels. The optimum between gain and noise depends on
your experimental requirements and on your sample.
Snap Reference Image Creates the reference image with the current settings.
See also
2 Setting Up and Performing a Dynamics Profiler Acquisition [} 771]
In this step, you set up your spots, i.e. the measurement positions. In the center screen area of this step, you have the same information as in the Correlation view.
Parameter Description
Spot Measurement
– Starts the selection mode that allows you to select and move the
spots in the reference image.
– + Allows you to add new spots in the reference image. You can add a
maximum of ten spots.
– Evaluate Spot Starts a scan for the currently selected spot. When the spot scan is
running, it changes to a Stop button to stop the acquisition.
– Move Down Moves the currently selected spot one position down in the list.
High Intensity Laser Range Activated: Enables the higher laser power range between 0.2% and 100%. This setting is applied to all laser lines. Note that selecting a value below 0.2% is not possible in this step.
– Spot Measurement Uses a pixel time of 1.2 μs and a maximum measurement time of 300 seconds. This option is the recommended default for spot measurement.
– ωr Calibration Uses a pixel time of 0.5 µs and a maximum measurement time of 100
seconds. The recommended measurement time for ωr calibration is 60
seconds.
Reference Image
– Create a Reference Image after Spot Measurement Activated: Automatically acquires a second reference image after the spot experiment is finished.
Start Experiment Starts the time series experiment for all spots.
Keep Positions Activated: Saves the spot positions defined in the wizard and takes
them over into the current experiment in ZEN.
Keep Acquisition Settings Activated: Saves the acquisition settings defined in the wizard and takes them over into the current experiment in ZEN.
See also
2 Correlation View [} 778]
2 Setting Up and Performing a Dynamics Profiler Acquisition [} 771]
This view displays your experiment and the correlation data of the individual measurement posi-
tions. The different areas containing the image, the charts and the table can be resized by drag-
ging the individual area boundaries with the pressed left mouse button.
1 Preview Image
Here you can see the acquired preview image with the measurement positions marked in
the view.
2 Parameter Table
This table displays the data for the different measurement positions. Each position is dis-
played as a row. You can select one or multiple rows, which updates the information dis-
played in the charts. For details, see Correlation Parameter Table [} 779].
3 Charts
Here you have the result charts visualizing the measurement data for the selected spots.
You can zoom into the displayed curves with the mouse wheel or by drawing a zoom
rectangle with pressed left mouse button. A right click in the chart area resets the chart
(zoom) to the default view.
4 View Options
Here you have the area for your standard view options as well as the specific ones pro-
vided on the Correlation Tools tab, see Correlation Tools Tab [} 780].
This table displays the data for the different measurement positions. Each position is displayed as
a separate row. If a value could not be calculated for a specific position, the corresponding entry
in the table is empty. A click on an entry highlights and selects the entire row. You can also select
multiple rows when pressing the Ctrl button while selecting the entries. The charts are then up-
dated accordingly. The units for individual parameters are displayed as tooltips. A right click in the
table opens a specific context menu, see Parameter Table Context Menu [} 788].
Parameter Description
Ø Activated: Uses this position for the calculation of an average for in-
dividual values. The average is displayed on the bottom of the table.
Color Sets the color for the respective measurement point by clicking on the field and selecting a color from the dialog.
Name Displays and sets the name for the individual position.
CPM Displays the CPM in kHz. The color of the value indicates the data
quality, see Data Quality Criteria (CPM) [} 788].
Number of Molecules Displays the number of molecules found at the measurement position. It is calculated as 1/(correlation amplitude), where the correlation amplitude is defined as the value of the correlation curve at the smallest lag time.
Triplet State Fraction Displays the triplet state fraction, a physical property of the utilized fluorophore that depends on the excitation intensity and, if applicable, on oxygen. A triplet state fraction of e.g. 16% means that 16% of all fluorophores are in the dark state.
Chi2 Displays the Chi2 value, see also Fit Calculation [} 789]. Chi2 is the deviation between the measured FCS curve and the fit curve. A value of 0 would mean perfect agreement.
See also
2 Calculation of Concentration and Diffusion Coefficient [} 788]
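The derived quantities in this table can be sketched in a few lines. The following is an illustrative example (the function names are mine, not ZEN API): the number of molecules is 1/(correlation amplitude) as defined above, and CPM (counts per molecule) is the count rate divided by that number:

```python
# Hedged sketch of the derived quantities described in the table:
# N = 1 / correlation amplitude, where the amplitude is the value of
# the correlation curve at the smallest lag time, and
# CPM = count rate / N (in kHz if the count rate is in kHz).

def number_of_molecules(correlation_curve):
    # correlation_curve: list of (lag_time, G) pairs, with the offset
    # removed so that G approaches 0 at large lag times.
    lag, amplitude = min(correlation_curve, key=lambda p: p[0])
    return 1.0 / amplitude

def cpm(count_rate_khz, n_molecules):
    return count_rate_khz / n_molecules

curve = [(1e-6, 0.25), (1e-5, 0.2), (1e-4, 0.1)]
n = number_of_molecules(curve)    # → 4.0
print(n, cpm(100.0, n))           # → 4.0 25.0
```

A higher CPM indicates brighter molecules, which is why the manual uses it as a data quality criterion.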
Parameter Description
Reference Image Only visible if you have created a second reference image after the
spot experiment.
Selects which reference image is displayed in the view.
– Post-Experiment Displays the reference image acquired after the experiment.
Display Chart Selects which chart is displayed above the correlation chart.
– Residuals Displays the residuals chart. Note that the residuals chart only con-
tains information after a fit was executed for the respective spot.
Filter
– Detrending Activated: Applies a detrending filter to remove the trend (e.g. due to photobleaching) from the experiment data. Detrending is calculated by a low-pass filter with the filter time constant provided in the input field.
Fit Model Selects the fit model for the correlation curve.
Amplitude Normalization
ωr Calibration Selects which ωr should be used for the fit of the curves.
See also
2 Fit Calculation [} 789]
2 Filters [} 789]
2 Normalization of Correlation Curves [} 791]
2 Using Custom ωr Calibration [} 774]
This tab enables you to create your own custom ωr calibration that you can use for fitting the correlation curves.
Parameter Description
Dye Selects the dye used for the ωr calibration.
Diffusion Coefficient Displays and sets the diffusion coefficient for the dye.
Calibrate Calibrates the ωr value and opens a file browser to save it as a calibration file.
See also
2 Creating a Custom ωr Calibration [} 774]
2 Using Custom ωr Calibration [} 774]
This view displays your experiment and the diffusion data of the individual measurement positions. For information about calculations, see Calculation of Concentration and Diffusion Coefficient [} 788]. The different areas containing the image, the charts and the table can be resized by
dragging the individual area boundaries with the pressed left mouse button.
1 Preview Image
Here you can see the acquired preview image with the measurement positions marked in
the view.
2 Parameter Table
This table displays the data for the different measurement positions. Each position is dis-
played as a separate row. For details, see Diffusion Parameter Table [} 784].
3 Polar Heatmap
This diagram area gives you the visual information about the diffusion in your sample
and which Airyscan fibers are used for the correlation, see Polar Heatmap [} 783]. A
right click into the area enables you to export the polar heatmap as an image.
4 Correlation Chart
Here you have the correlation chart for the currently selected spot. You can zoom into
the displayed curve with the mouse wheel or by drawing a zoom rectangle with pressed
left mouse button. A right click in the chart area resets the chart (zoom) to the default
view.
5 View Options
Here you have the area for your standard view options as well as the specific options
provided on the Diffusion Tools tab, see Diffusion Tools Tab [} 785].
The polar heatmap illustrates directed diffusions and barriers by using color gradients referring to
the number of present molecules in specific angles. On the top you have a visual representation of
the Airyscan detector that indicates the fibers that are used for the correlations. These heatmaps
are calculated from six pair correlation curves and the pair correlation function (pCF). The pair cor-
relation function is a set of cross-correlations between the central fiber and all fibers located at a
given distance from the center. The rainbow pattern is used to map the diagram color to the cor-
responding correlation amplitude. The color mapping and legend are the same for both heatmaps
and the color palette can be selected in the Diffusion Tools tab. A right click on the heatmap
displays a menu that allows you to save it as an image on your PC. The displayed degrees can vary
based on your system configuration.
See also
2 Pair Correlation of Airyscan Fibers [} 791]
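The pair correlation function described above can be sketched with the standard FCS definition of a normalized cross-correlation between two intensity traces. This is an illustrative example, not ZEN's implementation:

```python
# Illustrative sketch (standard FCS definition, not ZEN's code): the
# pair correlation between two detection elements is the normalized
# cross-correlation of their intensity traces,
#   G(tau) = <F1(t) * F2(t + tau)> / (<F1> * <F2>) - 1

def pair_correlation(f1, f2, tau):
    n = len(f1) - tau
    mean1 = sum(f1) / len(f1)
    mean2 = sum(f2) / len(f2)
    cross = sum(f1[t] * f2[t + tau] for t in range(n)) / n
    return cross / (mean1 * mean2) - 1.0

# A molecule passing fiber 1, then fiber 2 one time bin later, gives
# a positive correlation at tau = 1 but not at tau = 0:
f1 = [0, 5, 0, 0, 5, 0, 0, 5, 0]
f2 = [0, 0, 5, 0, 0, 5, 0, 0, 5]
print(pair_correlation(f1, f2, 1) > pair_correlation(f1, f2, 0))  # → True
```

Evaluating such cross-correlations between the central fiber and fibers at different angles is what produces the directional information shown in the polar heatmap.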
This table displays the data for the different measurement positions. Each position is displayed as
a separate row. If a value could not be calculated for a specific position, the corresponding entry
in the table is empty. A click on an entry highlights and selects the entire row and displays the
corresponding data in the chart. The units for individual parameters are displayed as tooltips. A
right click in the table opens a context specific menu, see Parameter Table Context Menu
[} 788].
Parameter Description
Ø Activated: Calculates an average of the individual values. The aver-
age is displayed on the bottom of the table.
Color Sets the color for the respective measurement point by clicking on the field and selecting a color from the dialog.
Name Displays and sets the name for the individual position.
CPM Displays the CPM in kHz. The color of the value indicates the data
quality, see Data Quality Criteria (CPM) [} 788].
Number of Molecules Displays the number of molecules found at the measurement position. It is calculated as 1/(correlation amplitude), where the correlation amplitude is defined as the value of the correlation curve at the smallest lag time.
Triplet State Fraction Displays the triplet state fraction, a physical property of the utilized fluorophore that depends on the excitation intensity and, if applicable, on oxygen. A triplet state fraction of e.g. 16% means that 16% of all fluorophores are in the dark state.
Chi2 Displays the Chi2 value, see also Fit Calculation [} 789]. Chi2 is the deviation between the measured FCS curve and the fit curve. A value of 0 would mean perfect agreement.
See also
2 Calculation of Concentration and Diffusion Coefficient [} 788]
Parameter Description
Reference Image Only visible if you have created a second reference image after the
spot experiment.
Selects which reference image is displayed in the view.
Correlation Chart Selects which correlation chart and which curves are displayed in the
correlation chart and allows you to set the color for the curves. You
can select to see the chart for Inner Elements or Outer Elements.
– Curve Binning Activated: Smoothes the curves by averaging for noisy data.
Amplitude Normalization
Polar Heatmap Palette Displays and selects the color palette used for the visualization in the polar heatmap.
See also
2 Pair Correlation of Airyscan Fibers [} 791]
2 Normalization of Correlation Curves [} 791]
This view displays your experiment and the flow information. The different areas containing the
image, the charts and the table can be resized by dragging the individual area boundaries with
the pressed left mouse button.
1 Preview Image
Here you can see the acquired preview image with the measurement positions marked in
the view.
2 Parameter Table
This table displays the data for the different measurement positions. Each position is dis-
played as a row. For details, see Flow Parameter Table [} 786].
3 Flow Diagram
This diagram illustrates the flow based on the Airyscan detector. The arrow indicates the
flow direction. For information about the flow calculation and the classification for flow
speed, see Flow Calculation [} 790]. The displayed degrees can vary based on your sys-
tem configuration. A right click into the area enables you to export the flow diagram as
an image.
4 Correlation Chart
Here you have the correlation chart for the individual spots. You can zoom into the dis-
played curves with the mouse wheel or by drawing a zoom rectangle with pressed left
mouse button. A right click in the chart area resets the chart (zoom) to the default view.
5 View Options
Here you have the area for your standard view options as well as the specific options
provided on the Flow Tools tab, see Flow Tools Tab [} 787].
This table displays the data for the different measurement positions. Each position is displayed as
a separate row. If a value could not be calculated for a specific position, the corresponding entry
in the table is empty. A click on an entry highlights and selects the entire row and displays the
corresponding data in the chart. The units for individual parameters are displayed as tooltips. A
right click in the table opens a context specific menu, see Parameter Table Context Menu
[} 788].
Parameter Description
Ø Activated: Uses this position for the calculation of an average for in-
dividual values. The average is displayed on the bottom of the table.
Visibility Toggles the visibility of the respective spot.
Color Sets the color for the respective measurement point by clicking on the field and selecting a color from the dialog.
Name Displays and sets the name for the individual position.
CPM Displays the CPM in kHz. The color of the value indicates the data
quality, see Data Quality Criteria (CPM) [} 788].
Number of Molecules Displays the number of molecules found at the measurement position. It is calculated as 1/(correlation amplitude), where the correlation amplitude is defined as the value of the correlation curve at the smallest lag time.
Flow Direction Displays the value that indicates the flow direction.
Flow Speed Confidence Interval Displays the confidence interval for the flow speed.
Parameter Description
Reference Image Only visible if you have created a second reference image after the
spot experiment.
Selects which reference image is displayed in the view.
Correlation Chart Selects which curves are displayed in the correlation chart and allows
you to set the color for the curves.
Fit Model Selects the fit model for the correlation curve.
– Show Fitted Curves Activated: Displays the fitted curves in the chart.
See also
2 Fit Calculation [} 789]
When you right click in the parameter table, you have the following option:
Parameter Description
Write Text to File Saves the data as a .txt file. You are prompted to enter a name and
select a folder before saving.
See also
2 Exporting Data From Tables [} 772]
8.4.15.3 Filters
Detrending Filter
The detrending filter is applied to remove slow trends (e.g. due to photobleaching) from the experiment data, yielding the detrended signal Id(t). The FilterWindowParameter [ms], which defines the length of the filter window, is set by the user.
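As a rough illustration, a detrending filter of this kind can be sketched as a sliding-mean trend removal. This is an assumption about the implementation, not ZEN's documented formula; only the window length derived from FilterWindowParameter follows the text above.

```python
import numpy as np

def detrend(intensity, dt_ms, filter_window_ms):
    """Sliding-mean detrending sketch (not ZEN's exact formula).

    The trend is estimated as a moving average over a window of
    FilterWindowParameter [ms] and subtracted from the signal; the
    overall mean intensity is added back so the count rate level is
    roughly preserved.
    """
    window = max(1, int(round(filter_window_ms / dt_ms)))
    kernel = np.ones(window) / window
    trend = np.convolve(intensity, kernel, mode="same")
    return intensity - trend + intensity.mean()
```

For a trend-free signal the filter leaves the trace essentially unchanged (apart from edge effects of the moving average).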
Dust Filter
The dust filter allows you to remove high-intensity peaks (e.g. caused by aggregated objects) from the experiment data. Calculations are done on the 500x down-sampled signal data. For comparison, the intensity data shown in the count rate chart is down-sampled with factor 2000x. A data bin is classified as dust (filtered out) if the following criterion is fulfilled:
UserParameter * I(t) > Mean(I(t))
If both the detrending and dust filter are activated, the dust filter is applied after the detrending
filter.
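The binning and classification step can be sketched as follows. The criterion is implemented literally as stated above; the function name and the returned keep-mask are illustrative choices, not part of ZEN.

```python
import numpy as np

def dust_filter(intensity, user_parameter, downsample=500):
    """Sketch of the dust classification on down-sampled data.

    The raw trace is binned 500x; a bin is classified as dust
    (filtered out) if UserParameter * I(t) > Mean(I(t)), as stated
    above. Returns a keep-mask over the bins and the binned trace.
    """
    n = len(intensity) // downsample * downsample
    bins = intensity[:n].reshape(-1, downsample).mean(axis=1)
    dust = user_parameter * bins > bins.mean()
    return ~dust, bins
```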
The unweighted fit of the correlation curve is done using the Levenberg-Marquardt algorithm. The reduced chi² is calculated as
χ²_red = Σ (Gτ − Fτ)² / (N − DF)
where Gτ is the correlation curve value, Fτ the fit value, N the number of correlation/fit values and DF the number of fit parameters.
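In code, the reduced chi-squared defined by Gτ, Fτ, N and DF above is a one-liner; this sketch assumes the standard unweighted definition.

```python
import numpy as np

def reduced_chi2(g, f, n_fit_params):
    """Unweighted reduced chi-squared of a correlation-curve fit.

    g: correlation values G_tau, f: fit values F_tau. Uses the
    standard definition sum((G - F)^2) / (N - DF), where N is the
    number of values and DF the number of fit parameters.
    """
    g = np.asarray(g, dtype=float)
    f = np.asarray(f, dtype=float)
    return float(np.sum((g - f) ** 2) / (len(g) - n_fit_params))
```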
One Component 2D
One Component 3D
Two Component 2D
Two Component 3D
Parameters
The parameters used in the calculations are the following:
Parameter Description
A The correlation amplitude.
In order to determine flow speed and direction, all possible cross-correlations are calculated at a
distance of two fibers within three inner fiber rings. All correlations in the same direction are aver-
aged, resulting in six correlation curves (30°, 90°, ..., 330°; by our definition, 0° means vertical
flow direction). All six curves are fitted globally with the following model:
Parameter Description
v The flow speed (fit parameter).
ϕ The angle between the flow and the vector connecting the fibers in
this group of cross-correlations (fit parameter).
r0 The apparent distance between the fibers (calibration parameter).
Gdiff(τ) The diffusion term (2D or 3D), see Fit Calculation [} 789].
Other parameters are defined as for the fit calculations, see Fit Calculation [} 789].
The calculated flow speed values are classified depending on the diffusion time of the analyzed
fluorophore and also based on statistical quality of the flow correlation curves:
Amplitude normalization of the correlation curves allows you to compare different correlation
curves and see the differences in diffusion times between them.
If the correlation curves have already been fitted, the amplitude normalization uses the amplitude fit parameter. The selected correlation curves are rescaled so that each selected curve has a correlation amplitude equal to 1 (corresponding number of molecules = 1).
If no fit is available, the curves are rescaled by Amean, the mean amplitude calculated over the first 10 points of the correlation curve.
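A minimal sketch of this normalization, assuming simple division of the curve by the amplitude estimate (the function name and signature are illustrative):

```python
import numpy as np

def normalize_amplitude(g, fitted_amplitude=None):
    """Rescale a correlation curve to amplitude 1.

    If a fitted amplitude is available it is used; otherwise A_mean,
    the mean of the first 10 correlation values, serves as the
    amplitude estimate, as described above.
    """
    g = np.asarray(g, dtype=float)
    a = fitted_amplitude if fitted_amplitude is not None else g[:10].mean()
    return g / a
```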
The pair correlation function (pCF) is a set of cross-correlations between the central fiber and all fibers located at a given distance from the center. For a better signal-to-noise ratio, cross-correlations are calculated in both directions and then averaged. The pCFs are used and displayed in the polar heatmaps of the Diffusion view.
The Diffusion view can display the data of two different sets of fibers. The group called Outer El-
ements comprises fibers with a slightly bigger distance from the center (blue element) and is illus-
trated with orange in the graphic below. The second group is called Inner Elements and com-
prises fibers with a slightly smaller distance from the center. They are illustrated with the green
color in the graphic below.
8.4.15.8 ωr Calculation
and D (diffusion coefficient) needs to be provided and added in the ωr Calibration tab. The diffu-
sion coefficients of commonly used dyes are already specified in a dropdown list. The following
requirements need to be fulfilled in order to generate a suitable calibration file:
Four values are calculated and can be accessed by opening the calibration file with a text editor:
§ W0Ring3: Effective radius of the observation spot (detector elements 1 to 19 binned together)
§ W0Ring123: Average radius of the observation spot for a single detector element (average
value for detector elements 1 to 19)
§ R0Ring123: Effective distance between the detector elements for flow measurements
§ StructuralParameter: Ratio between the observation spot waist in the axial and lateral direc-
tion
The generated ωr calibration file can be used in the Correlation Tools tab to replace the default
ωr calibration values.
See also
2 ωr Calibration Tab [} 782]
2 Correlation Tools Tab [} 780]
Correlation values are calculated for the lag times τ commonly used in 16/8 multi-tau hardware correlators. The lag time τ is increased linearly in steps of τ = PixelDwellTime for the first 16 values. Then the lag time interval is doubled and the next eight lag times are calculated; the interval is doubled again after every further eight lag-time values.
The very first correlation value, corresponding to one PixelDwellTime, is skipped since it is biased by the correlation of neighboring data points introduced by the low-pass filtering during digitization.
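The 16/8 lag-time scheme described above can be sketched directly; the function name and block count are illustrative.

```python
def multitau_lags(pixel_dwell_time, n_blocks=5):
    """Lag times of a 16/8 multi-tau correlator scheme.

    The first 16 lags increase linearly in steps of PixelDwellTime;
    afterwards the lag-time interval doubles for every further block
    of 8 lags. The very first lag (one PixelDwellTime) is dropped
    because it is biased, as described above.
    """
    lags = [pixel_dwell_time * k for k in range(1, 17)]  # linear part
    step = pixel_dwell_time
    for _ in range(n_blocks):
        step *= 2  # doubled interval for the next 8 lags
        for _ in range(8):
            lags.append(lags[-1] + step)
    return lags[1:]  # skip the biased first value
```

With a dwell time of 1, the sequence runs 2, 3, ..., 16, then 18, 20, ..., 32, then 36, 40, and so on.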
This module offers functionality for the processing of FIB-SEM stacks. This chapter describes how
the different functions of the EM Processing Toolbox can be used to process a FIB-SEM-stack ac-
quired with SmartFIB in the ZEN software. Note that parts of this special workflow also require
functionalities of the ZEN Connect module. To make yourself familiar with this module, see also
the documentation for ZEN Connect [} 619].
See also
2 EM Processing [} 105]
This chapter gives an overview of how you can process your FIB-SEM stacks and align them. Consider the following workflow:
1. Sorting of image files:
With the function Sort SmartFIB Tiffs [} 117] you can sort your .tiff image files created by
SmartFIB according to channel name, number of pixels, image size, and spacing of the im-
ages, corresponding to slice thickness. Note that this function only works if the tiff files have
their default names (e.g. channel0_slice_0001.tiff or slice_0001.tiff)! Do not rename your files
before you use this function!
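The sorting by file name can be sketched as below. This assumes only the default naming scheme mentioned above (channel0_slice_0001.tiff or slice_0001.tiff); the real Sort SmartFIB Tiffs function additionally groups by pixel count, image size, and slice spacing from the metadata, which is not reproduced here.

```python
import re

# Matches the default SmartFIB names, with an optional channel prefix.
NAME_PATTERN = re.compile(r"^(?:channel(\d+)_)?slice_(\d+)\.tiff?$")

def sort_key(filename):
    """Return a (channel, slice) tuple for sorting SmartFIB tiffs."""
    match = NAME_PATTERN.match(filename)
    if match is None:
        raise ValueError(f"unexpected file name: {filename}")
    channel = int(match.group(1)) if match.group(1) else 0
    return (channel, int(match.group(2)))

files = ["slice_0002.tiff", "channel1_slice_0001.tiff", "slice_0001.tiff"]
ordered = sorted(files, key=sort_key)
```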
If you have an imported FIB-SEM stack which requires manual pre-alignment before using the au-
tomatic z-alignment, you can use the image processing function Coarse Z-Stack Alignment for
a manual alignment of your stack. See the following instruction.
Prerequisite: You have opened your (imported) z-stack that needs alignment in ZEN.
1. On the Processing tab, select the image processing function Coarse Z-Stack Alignment.
2. Click on the Setup button.
à The Coarse Z-Stack Alignment Setup [} 106] opens.
à The current z-plane (selected by the Z-Position slider in the Dimensions tab) is displayed
in cyan and the following z-plane is displayed in red.
3. If you have an image with multiple channels, select the channel which should be displayed
in the Image View with the Channels tool.
4. With the Z-Position control in the Dimensions tab, go to the z-plane where the image
needs an alignment.
5. Select the Speed with which your alignment should be performed.
6. Shift the following z-planes with the arrow buttons in the top left of the setup until the x/y shift in the z-stack seems to be eliminated. Alternatively, you can also use the arrow keys on your keyboard.
7. If necessary, adjust the speed step size during the alignment.
8. Repeat the steps 4 to 6 until all shifts in the z-stack are corrected.
à Every shift is displayed in the Shift List on the left side of the setup.
9. Click on Finish to save the changes and close the setup.
à The Shift List with all the shifts is displayed in the Parameters tool.
10. On the top of the Processing tab, click on Apply.
You have now aligned the planes in your z-stack manually to correct a shift in x and/or y within
the z-stack.
See also
2 Aligning Z-Planes Automatically (Based on a ROI) [} 795]
With the image processing function Z-Stack Alignment with ROI [} 118] you can perform an au-
tomatic alignment of the z-planes in a stack. This alignment can be based on a particular region of
interest that you can draw into your image.
10. Under Channel Component, select one of the image channels whose alignment transfor-
mation matrix is then also applied to the other channel(s). If an alignment should be calcu-
lated for each channel individually, deactivate the checkbox Single Component. This step
is only applicable for images with multiple channels.
11. At the top of the Processing tab, click on Apply.
The z-planes of your image are now aligned automatically. The progress of the alignment process
is displayed in the progress bar on the bottom of the ZEN software.
With the processing function Slices Replacement [} 117] you can replace slices of a z-stack with
the previous or next slice in the stack.
With the processing function Cut Out Regions [} 109] you can define a region in your z-stack and
cut it out as a volume.
5. In the image, draw your so-called support region to mark the structure of interest.
6. Repeat the previous steps for the last slice where your structure of interest appears as well
as for those slices in between where significant changes (in shape and/or position) of this
structure take place.
7. Click on Interpolate.
à Interpolated regions are created for all slices between the support regions. The interpo-
lated regions are displayed with a slightly darker color.
8. Use the Z-Position slider to move through your stack and examine whether the interpo-
lated regions satisfyingly cover the structure of interest in your stack.
9. If you find a slice where the interpolated region does not cover your structure of interest,
draw a new support region on this slice and click on Interpolate again.
10. When the entire structure is marked satisfyingly by the interpolated regions, click on Finish.
à The Define Region setup closes and your image with the newly created regions is dis-
played in the Analysis view.
11. On the Processing tab, click on Apply.
Your marked structure is now cut out of the z-stack and opened as a new image.
In ZEN you can import SmartFIB stacks of Crossbeam microscopes. The orientation of these stacks
differs from standard z-stack acquisition, as the acquired images are tilted by a certain angle com-
pared to a z-stack acquired on a light microscope. The import function calculates this tilt from the
metadata of the image. If the import finds no metadata concerning the tilt angle and the user does not enter a value for the sample angle, it uses a default angle of 54 degrees (the default angle between the FIB and SEM columns on the Crossbeam) and the image is rendered with a 90 degree tilt when displayed in a ZEN Connect project. Alternatively, you can enter the angle of your sample
during import, e.g. as set during acquisition of the stack with SmartFIB, and the import then cal-
culates the tilt angle based on this sample angle.
During import, the XY offset metadata of the individual slices is ignored by default and only the offset of the first tiff file is considered. This default avoids the creation of a slanted z-stack; however, in certain cases, such as an on-grid-thinning configuration, the XY offset of the individual slices needs to be taken into account.
1. On the Processing tab, select the image processing function Import SmartFIB TIFFs.
à The function settings are displayed in the Parameters tool.
2. Click on Select Files.
à A file browser opens.
3. Select the images you want to import as FIB stack.
Note: Select only images with consistent metadata with respect to number of pixels, image size, and spacing of the images (i.e. use the Sort SmartFIB Tiffs function before importing the data).
Note: To make sure the stack is composed and ordered correctly, pay attention to how the images are sorted in the explorer and in which order you choose them.
4. Enter a File name for the FIB stack.
5. If you import images without scaling information, deactivate the Auto checkbox for XY-
Scaling and manually enter the information.
Note: ZEN currently cannot determine automatically if scaling information is present.
6. To set the slice distance manually, deactivate the Auto checkbox for Z-Spacing. This step is optional and should only be done if you have reason to believe the value calculated from the metadata is incorrect. If you leave the Auto checkbox activated, the slice distance is calculated automatically from the metadata of the images.
Note: When you set the slice distance manually, the information in the metadata is ignored.
7. If you know the angle of your sample, deactivate the Auto checkbox for Sample Angle
and enter it. Otherwise the tilt for the image is calculated using the metadata, or the sam-
ple angle is set to the default of 54 degrees (default angle between FIB and SEM column at
the Crossbeam) and the image is rendered with a 90 degree tilt (if no information is avail-
able in the metadata).
8. If you want to consider the xy offset metadata of the individual slices for the import, acti-
vate the checkbox Read XY Offsets. Note that this can lead to a slanted z-stack depend-
ing on the sample and the metadata, assuming tilt correction was used during acquisition
with SmartFIB (e.g. if the metadata contain incorrect offset information).
9. Click on Apply.
The FIB stack is now imported into ZEN and a czi-file is created.
Note: When importing larger image files, it may take a while until the entire stack is visible in the
viewer.
Topography Acquisition
With the Topography Tool you can acquire and inspect surfaces or surface structures of differ-
ent sample types (e.g. water plates, solar cells). By the help of the Topography Measurement
Wizard you can perform a confocal image acquisition of your sample. The advanced analysis of
the topography image is performed by the help of the ConfoMap software which is included in
the software package.
In the chapter Functions & Reference you find detailed functional descriptions of the Topogra-
phy Tool [} 806] and the Topography Measurement Wizard [} 806].
In the chapter Workflow Topography Acquisition you find a detailed how-to guide for topog-
raphy acquisition, see Introduction.
If you want to acquire a topography image in ZEN, you need to start the topography measure-
ment wizard.
3. Click on Set stage position to define the anchor point of your tile scan at the current
stage position.
4. In the Navigation section (must be expanded first), click on the arrow buttons to navigate through the single tile images. This can help to check whether the tiles cover the full ROI.
5. In the Acquisition Mode tool, adjust the settings for Objective, Frame Size, and Bits per
Pixel according to your requirements.
6. Focus on the surface. The default pinhole setting is Max. This allows you to easily focus on the surface just like with a widefield microscope. Of course, focusing via the eyepieces is also possible.
7. In the Channel tool, adjust the pinhole to the required diameter. For best performance of
the system we recommend setting the pinhole size to 1 Airy Unit (1AU button). Since the
depth of field is reduced in confocal images you most probably need to slightly adjust the
focus position.
8. Adjust the intensity of the laser. To do so, focus on the brightest section of the z-stack.
Then adjust the laser either via the 405 nm slider or the Master Gain slider. Adjust the in-
tensity so that you do not see any overexposed pixels (shown in red).
9. In the Z-Stack tool, set up the Z-Stack by adjusting the z-range (upper and lower limit of
your z-stack). We recommend using First/Last mode for that.
à Make sure the whole surface you want to measure is within these limits. In case of a tile scan you can use the navigation arrows in the Tiles tool to jump to the different tiles to check.
10. Click on Next.
à The wizard moves to Step 2/2 Z-Stack Acquisition and starts the image acquisition.
à After image acquisition the topography image (height map) will be automatically gener-
ated. You will see the image in the center screen area after the acquisition and the pro-
cessing is finished.
11. Adjust the thresholds by using the Noise Cut slider, if necessary. Note that pixels below the
lower threshold are displayed in blue color and pixels above the upper threshold in red
color.
If you want to perform layer thickness measurement in ZEN, you need to start the topography
measurement wizard.
3. Focus on the surface. The default pinhole setting is Max. This allows you to easily focus on
the surface just like with a widefield microscope. Of course focusing via the eyepieces is
also possible.
4. In the Channels tool, adjust the pinhole to the required diameter. For best performance of the system we recommend setting the pinhole size to 1 Airy Unit (1AU button). Since the depth of field is reduced in confocal images you most probably need to slightly adjust the focus position.
5. Adjust the intensity of the laser. To do so, focus on the brightest section of the z-stack.
Then adjust the laser either via the 405 nm slider or the Master Gain slider. Adjust the in-
tensity so that you do not see any overexposed pixels (shown in red).
6. In the Z-Stack tool, define the z-stack by adjusting the z-range (upper and lower limit of your z-stack). We recommend using First/Last mode for that.
à Make sure that all layers you want to measure are within these limits. In case of a tile scan you can use the navigation arrows in the Tiles tool to jump to the different tiles to check.
7. Click on Next.
à The wizard moves to step 2/3 Sectioning and starts the image acquisition.
à After image acquisition and processing you will see the image in the center screen area.
8. Click on X-Z Layer or Y-Z Layer to create the corresponding cross section you want to an-
alyze in detail.
9. Click on Next.
à The wizard moves to step 3/3 Measurement.
à The selected cross section will appear on the right side of the screen.
à A table with the measurement data will appear on top of the profile.
12. Click on Add measurement to table to transfer the data to the result table on the left. Within the table you are able to rename the layers and calculate the true layer thickness by entering the refractive index.
13. Finally you have three options for saving your results:
- If you click Save table, the result table is saved to the file system.
- If you click Save selected profile, the drawn profile is exported for further investigation in third-party software.
- If you click Save Z-Stack, the raw data of the z-stack is saved to the file system.
You have successfully measured the layer thickness of your sample, saved the results table, the
profiles and the raw data of the z-stack.
See also
2 Layer Thickness Measurement Wizard [} 808]
Parameter Description
Image Acquisition If this option is selected, you can acquire an image with the help of the Topography Measurement wizard.
The wizard guides you through the image acquisition. Then you can
export the image to ConfoMap for further processing.
The wizard is started by the Start button at the bottom of the tool.
Load Z-Stack If this option is selected, you can load an existing confocal z-stack from the file system.
To do so, simply click on the folder icon and select the z-stack image from the file system. If you click the Start button, the Topography wizard will directly switch to the last wizard step to apply the noise cut and transfer the data to ConfoMap.
Automatic Noise Cut If activated, the wizard will automatically perform the noise cut filtering with the preset parameters in the tool.
If not activated, the noise cut filtering is an additional step in the Topography Measurement wizard.
- Thresholds Here you can define thresholds for 8-bit and 16-bit images.
See also
2 Layer Thickness Measurement Tool [} 808]
2 Layer Thickness Measurement Wizard [} 808]
2 Acquiring Topography Images [} 798]
After selecting Image Acquisition in the Topography tool, the Topography Measurement wizard consists of two steps:
Note that when you have selected Load Z-Stack in the Topography tool and you start the wiz-
ard, this step is not available.
Info
The noise cut filtering will be skipped if the checkbox Automatic Noise Cut in the Topogra-
phy tool is activated. The preset parameters will be immediately applied to the image and the
wizard continues with the export to ConfoMap.
See also
2 Topography Tool [} 806]
2 Layer Thickness Measurement Wizard [} 808]
2 Acquiring Topography Images [} 798]
After the wizard is started, you will see the live image (Continuous mode) from your sample in the Center Screen Area. You can set up the acquisition parameters in the left area of the screen. The individual steps are described in the next chapters.
Note that these are default ZEN blue tools customized for acquisition of topography images, see
chapter Acquisition Setup Tools.
See also
2 Tiles [} 812]
2 Channel [} 811]
2 Acquisition Mode [} 810]
2 Z-Stack [} 812]
Parameter Description
Noise Cut Here you can adjust the noise cut parameters manually. To do so, use the slider or enter the lower and upper values in the input fields.
Save Surface Save the image of the surface to the file system.
Export to ConfoMap Exports the image to the ConfoMap software. The ConfoMap software will be started automatically after you have clicked the button.
Parameter Description
Image Acquisition If this option is selected, you can acquire an image with the help of the Layer Thickness Measurement wizard.
The wizard guides you through the image acquisition. Then you can
export the image to ConfoMap for further processing.
The wizard is started by the Start button at the bottom of the tool.
Load Z-Stack If this option is selected, you can load an existing confocal z-stack from the file system.
To do so, simply click on the folder icon and select the z-stack image from the file system.
Start If you click on this button, the Layer Thickness Measurement wizard [} 808] is started.
See also
2 Topography Tool [} 806]
2 Topography Measurement Wizard [} 806]
2 Measuring Layer Thickness [} 802]
After selecting Image Acquisition in the Layer Thickness Measurement tool, the Layer Thickness Measurement wizard consists of three steps:
See also
2 Layer Thickness Measurement Tool [} 808]
2 Topography Measurement Wizard [} 806]
2 Measuring Layer Thickness [} 802]
After the wizard is started, you will see the live image (Continuous mode) from your sample in the Center Screen Area. You can set up the acquisition parameters in the left area of the screen. The individual steps are described in the next chapters. Note that these are default ZEN blue tools customized for the acquisition of layer thickness measurement images.
See also
2 Channel [} 811]
2 Acquisition Mode [} 810]
2 Z-Stack [} 812]
Parameter Description
X-Z Layer (green) If selected, the X-Z section will be used for further investigation.
Y-Z Layer (red) If selected, the Y-Z section will be used for further investigation.
Cut Lines Here you adjust the position of the desired cut line. To do so, use the slider or enter the position directly in the input field.
Mid button If you click on this button, the corresponding cut line will be posi-
tioned in the middle (center) of the image.
Line Width Here you adjust the line width of the corresponding cut line. Default
value is 1 (in pixel).
Parameter Description
Profile mode Here you can select the profile mode for the measurement:
- Arrow If selected, you can draw an arrow into the profile. This measures the profile along the drawn line of the arrow.
- Rectangle If selected, you can draw a rectangle into the profile. This measures the profile over the whole area of the rectangle.
Add measurement to table If you click on this button, the measurement result will be added to the measurement table at the left side. In this table you can rename your profile and correct the result by the refractive index.
Save selected profile If you click on this button, the selected profile will be saved as a .czt file.
Save Z-Stack If you click on this button, the Z-Stack image will be saved as .czi file.
Save table If you click on this button, the table will be saved as .czt file.
Here you adjust scanning and acquisition parameters that you want to apply for the entire experi-
ment.
Parameter Description
Frame Size Adjust the frame size (in pixel) of the displayed image by entering
the desired value in the two input fields.
To change the frame size you must stop the live image acquisi-
tion.
- Presets button By clicking on this button you can select from a list of default
frame sizes (e.g. 128 x 128 or 512 x 512).
In case of topography and layer thickness measurements we rec-
ommend starting with 1024 x 1024.
Scan Speed Set the scan speed by adjusting the slider from 1 (slow) to 16
(very fast).
Note that the available maximum scan speed depends on the selected Frame Size and zoom factor. In Airyscan MPLX mode the speed is indicated in frames per second, estimated from pixel time and pixel number (without scanner return time). The actual frame rate might be lower, especially with very small frames.
- Confocal button By clicking on this button the frame size (image resolution) will
be set to an optimal value corresponding to the optical magnifi-
cation (objective) and the zoom factor.
This provides an image where no information is lost and no empty data are generated, as optimal sampling is achieved. The optimal value is calculated for the given objective and magnification settings, matching a 2-fold oversampling according to the Nyquist criterion. Rectangular image dimensions are preserved.
- Unidirectional The laser scans in one direction only, then moves back to scan
the next line.
- Bi-directional The laser also scans when moving backwards, i.e. the scan time
is halved.
In case a pixel shift between forward and backward movement
(double image), resulting from bidirectional scanning, is visible,
use the Correction X/Correction Y sliders to correct it.
By clicking on the Auto button, an automatic scan correction will
be performed.
Averaging
§ Sum Intensity:
Uses the sum of all images.
- Bits per Pixel In the drop-down list you can adjust the color bit depth to 8 Bit
or 16 Bit (i.e. 256 or 65536 gray values).
To change the bit depth you must stop the live image acquisi-
tion.
Scan Area In this section, you can adjust the position of the scan area.
The outer frame corresponds to the field of view of the micro-
scope.
The inner frame represents the scan area. All changes (Offset,
Rotation, Zoom) made in this section will be immediately applied
to the scan area.
Following functions are available:
- Rotation Adjust the rotation angle using the Rotation slider. You can also enter a specific value in the input field. By clicking on the O button behind the input field, the rotation is reset to the default position (zero degrees).
- Zoom Adjust the zoom level (from 0.5x - 40x) by using the Zoom
slider. You can also enter a specific value in the input field.
If you click on the 1/2 button behind the input field, the zoom
level will be reset to default (0.5x).
8.6.8 Channel
With this tool you control and adjust the laser. The following parameters are available:
Parameter Description
405 nm Here you can set the required attenuation (in %) of the laser using the
slider, the arrows, or typing in the input field.
- Max Opens the pinhole to its maximum diameter.
(Default)
This is also the default setting for the pinhole. It allows you to easily focus on the surface just like with a widefield microscope. Of course, focusing via the eyepieces is also possible.
Master Gain Here you can control the voltage of the PMTs. Higher voltage in-
creases the gain of the PMT. The image becomes brighter and you
may be able to reduce the laser power. At higher voltage, the noise
level in the image increases.
The optimum between gain and noise depends on your experimental
requirements and on your sample. The maximum available voltage for
multialkali PMTs is 900V.
Digital Offset Here you can perform adjustments on the background of the image.
Digital Gain Here you can digitally amplify the laser signal.
8.6.9 Tiles
Here you setup the tiles acquisition. The following parameters are available:
Parameter Description
Number of Tiles Here you can enter the desired number of tiles in X- and Y-direction.
Set stage position If you click on this button, the current stage position is defined as
starting position for the tiles acquisition.
Start position mode Here you can define the alignment of the tile scan with respect to the defined position.
- Center If selected, the start position is the center of the tile scan.
- Upper Left If selected, the start position is the upper left corner of the tile scan.
Start position Displays the X and Y value of the starting position for the tiles acquisi-
tion.
8.6.10 Z-Stack
Info
Z-stack images are always acquired from bottom to top automatically, irrespective of whether
you have defined the top or bottom Z-plane of your stack as the first Z-plane. This acquisition
sequence increases the accuracy of the Z-positioning.
Parameter Description
First/Last If activated, you are able to configure the Z-stack via setting the first
and the last positions of the Z-stack, see Configuring a Z-Stack Man-
ually (First/Last Mode) [} 396].
Center If activated, you are able to configure the Z-stack via setting the cen-
ter plane of the Z-stack, see Configuring a Z-Stack Manually (Center
Mode) [} 396].
Depending on which mode you have activated, you will see the following parameters for config-
uring the Z-stack:
Parameter Description
Set Last/Set First Only visible for First/Last mode.
By clicking on the Set Last and on the Set First button you deter-
mine the current position as last or first position of the Z-stack.
Range Displays the range of the configured Z-stack from the last to the first
section plane.
Slices Here you can enter the number of Z-slices that the Z-stack will have.
Interval Here you can enter the desired distance between the Z-slices.
Optimal The number on this button shows the distance calculated for the
channels set and the current microscope according to the Nyquist cri-
terion. If you click on the button, this value is automatically adopted
into the Interval input field.
Keep § Interval:
Keeps the set interval between the section planes constant if you
change configuration parameters in the Z-Stack tool.
§ Slice:
Keeps the set number of Z-slices constant.
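The Nyquist-based Optimal interval from the table above can be illustrated with a rough estimate. The axial-resolution formula below (d_axial ≈ λ·n/NA²) is a common textbook approximation and an assumption here, not ZEN's documented formula:

```python
def optimal_z_interval_nm(wavelength_nm, refractive_index, numerical_aperture):
    """Rough Nyquist-based z-interval estimate (sketch only).

    Assumes the common axial-resolution approximation
    d_axial ~ lambda * n / NA^2 and 2-fold (Nyquist) sampling,
    i.e. interval = d_axial / 2. ZEN's exact formula may differ.
    """
    d_axial = wavelength_nm * refractive_index / numerical_aperture ** 2
    return d_axial / 2.0
```

As expected, higher-NA objectives yield a smaller optimal interval, i.e. more slices over the same z-range.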
9 Reference
9.1 Menus
Open... Opens the Open Document dialog window. Here you Ctrl+O
can select the file you want to open.
Save As CZI Saves the selected file under a new name. In case of an image, only the .czi file format can be used. Ctrl+Shift+S
Save As with Options... Saves the selected file under a new name. Advanced options can be selected:
File type: czi, jpeg, jpg, png, tif, tiff, bmp, gif, wmp,
wdp
Compression (only for czi and jpg/jpeg):
§ Original: The image keeps the compression of the
original image.
§ Uncompressed: The image is saved without com-
pression.
§ Compressed (JPEG XR): An uncompressed image
will be compressed with the selected quality. A com-
pressed image keeps the compression.
§ Force Compression (JPEG XR): A compressed im-
age will be decompressed and compressed with the
selected quality.
Send to arivis Pro This function is only available if arivis Pro is installed and has a valid license.
When using this command, arivis Pro opens and the
current image is imported into arivis Pro. The current
display settings in ZEN are preserved as much as possi-
ble.
Note: Special CZI image types such as ApoTome raw images, Airyscan raw images, or some other image types are not always supported in arivis Pro. In these cases, convert them into normal processed images before sending them to arivis Pro for further analysis.
Add to ZEN Connect Adds the image to the currently opened ZEN Connect
Project project.
Open File Browser Opens the file browser in the Center Screen Area. Ctrl+F
Open Containing Opens the folder in which the selected file is located.
Folder
ZEN Data Storage Only available if you have a connection to ZEN Data
Storage and the module is activated in Tools > Op-
tions.
§ Open Image
Displays a dialog to open an image from ZEN Data
Storage.
§ Save Image
Saves the current image to the ZEN Data Storage.
§ Open ZEN Connect Project
Displays a dialog to open a Connect project from
ZEN Data Storage.
§ Save and Convert ZEN Connect Project
Saves the current Connect project to the ZEN Data
Storage.
Recent Files... Opens the Recent Files dialog window. The Recent Ctrl+R
Files dialog displays the files you have used previously,
separated according to file type.
Print Preview Opens the Print Preview [} 818] dialog for the selected Ctrl+F2
file.
See also
2 Image Export [} 120]
Here you can create different types of new, empty documents (images, tables, and ZEN Connect
projects).
Select the desired document type and click on OK. The image or table will be generated and
opened in the current workspace.
Parameter Description
Images Creates a new, empty image file (*.czi).
- Document Here you can enter the name of the new table document.
name
- Columns Enter the number of columns that you want the new table to have in
the input field.
- Rows Enter the number of rows that you want the new table to have in the
input field.
- Column Name Here you can enter the title of the column.
- Column type Here you can select the desired data type of a column. The following
types are available:
§ String
§ Integer
§ Real
- Default Value Here you can enter a default value that you want the column cell to
contain.
Parameter Description
ZEN Connect Creates a ZEN Connect project. For more information, see Creating
Project a ZEN Connect Project [} 625].
Parameter Description
Printer Selects the printer you want to use.
Properties Opens a dialog containing the printer properties where you can con-
figure advanced settings. This dialog window depends on the printer.
Width Displays the width of the page according to the chosen format.
Height Displays the height of the page according to the chosen format.
Pages per sheet Selects how many pages you want to print on one sheet.
Number of copies Defines the number of copies that you want to print.
Auto fit Activated: Adjusts the size of the report or image to the size of the
page.
Scale pages Activated: Adjusts the size of the report or image to the factor set in
the input field to the right. Here you can set the desired enlargement/
reduction factor for the report or image.
A factor of 100% corresponds to the Auto fit option.
Auto fit Selects the zoom factor with which the page view is displayed in this
dialog.
Paste Inserts the copied graphic element into the active image. Ctrl+V, Shift+Ins
Select All Selects all graphic elements drawn into the image. Ctrl+A
ROI (Region of Interest) > Draw Region of Interest Draw a certain rectangular region that is of particular Ctrl+U
interest to you into the image. The ROI is displayed
with a red "marching ants" line. You can draw several
regions into an image.
ROI (Region of Interest) > Draw Rotatable Region of Interest Draw a certain rectangular region that is of particular Ctrl+Shift+R
interest to you into the image. This region can be rotated
and is displayed with a yellow "marching ants"
line. You can draw several regions into an image.
Note: When used on whole slide images, this function
cuts out the pixel data encompassed by the line. This
can create images with tiles of different sizes. Such images
can be incompatible with certain processing functions.
You can instead use the Create Image Subset
processing function and activate the Keep Tiles option to
keep all tiles the same size.
ROI (Region of Interest) > Create Subset Images from ROI Creates new image documents from the selection regions Ctrl+Shift+C
you have drawn in. All dimensions of the image
are taken into account here. This function works for
both the non-rotatable and rotatable ROIs (the red and
the yellow regions).
Note: When used on whole slide images, this function
cuts out the pixel data encompassed by the line. This
can create images with tiles of different sizes. Such images
can be incompatible with certain processing functions.
You can instead use the Create Image Subset
processing function and activate the Keep Tiles option to
keep all tiles the same size.
Create Image from Creates an image from the current view and opens it
View as a new document in ZEN.
Player Here you can navigate through a Z-stack or a time series im-
age.
Text View Displays the text name of a file in the Document bar.
Small Thumbnail View Displays a small preview image and the name of a file in the
Document bar.
Large Thumbnail View Displays a large preview image and the name of a file in the
Document bar.
Shared View Controls General and specific view controls are shared for all contain-
ers and are active for the currently selected image container.
Separate View Controls Each container has its own separate general and specific view
controls that become active when the associated image con-
tainer is selected.
Show All (Global) Activates the Show All mode in every tool.
Show Macro Environment Activates the Macro Tool [} 979] in the Right Tool Area. If
you have licensed the Macro Environment module, the
Macro menu appears in the Menu bar. The Macro Environ-
ment is deactivated by default.
Draw Region of Interest Allows you to draw in a region of interest (ROI). Ctrl+U
Info
This menu is available only if you have licensed the Macro Environment module.
Activate the Macro Environment controls in the View menu by clicking on Show Macro Envi-
ronment.
The macro editor is the IDE (Integrated Development Environment) used to edit, execute, debug
and manage macros. It is started via the menu Macro > Macro Editor or from the Macro tool in
the Right Tool Area.
1 Menu bar
For a detailed description of the menus, please read Macro Editor Menus [} 824].
2 Tool bar
With the icons you can quickly access the most important functions, like saving or editing
macros.
3 Button bar
Here you find the buttons to record and control macros. For more information, see But-
ton Bar [} 826].
4 Macro list
Displays all macros and folders. The macros can be moved into a different folder via drag
and drop. Macros and folders can be edited via right-click context menu, see also Macro
List Context Menu [} 827].
5 Code Window
The central area of the Macro Editor shows the program code of the selected macro. Edit
and write your macros in here. You can either use the Record button or type in the pro-
gram code directly. Also a multi-document view is available, meaning that you can open
several code windows at once.
File Menu
Edit Menu
Replace Replaces the detected text with the new text. Ctrl+H
Record Menu
Debug Menu
Start Without Debug- Executes the macro up to a breakpoint or error without Ctrl+F5
ging debugging.
Set Line To Execute Sets the pointer to the next active command line. F8
Help Menu
Macro Object Opens the Macro Object Model Online Help dialog.
Model… This documentation includes descriptions of all objects
available for the macro editor.
GitHub Opens the ZEISS GitHub page for OAD in your web
browser.
Internet access required.
On this bar you find the buttons to record and control macros.
Parameter Description
New Creates a new empty macro.
Debug Starts the debugger and executes the macro up to a breakpoint or er-
ror.
Step Over Starts the debugger stepwise, command by command, without step-
ping into function blocks.
Step Into Starts the debugger stepwise, command by command, and steps into
function blocks.
Parameter Description
Step Out Starts the debugger stepwise, command by command, and steps out
of function blocks.
Set Line Sets the pointer to the next active command line.
Folder menu
This context menu is only displayed if you right-click a folder in the Macro list.
Parameter Description
Add Macro Adds a new macro to the folder.
Replace with Con- Deletes the folder and moves all macros in the folder into the next
tent higher folder.
Macro menu
This context menu is only displayed if you right-click a macro in the Macro list.
Parameter Description
New Creates a new macro.
Diagnostics Opens the Diagnostics dialog. There you receive detailed Ctrl+Shift+D
reports on your entire system state.
Kitchen Timer… Opens the Kitchen Timer dialog. There you can set a time
period after which an alert is played.
Dosimeter… Opens the Dosimeter dialog. There you can set multiple time
points at which an alert is played.
Users and Groups... Opens the Users and Group Management [} 829] dialog.
System Maintenance and Opens the System Maintenance and Calibration dialog.
Calibration...
Helps to keep your system in perfect working condition.
Here you can activate or deactivate the modules for which you currently own a license. Note that
all the changes made here are implemented immediately.
Parameter Description
Available Products Here you can see the products available for your license. Click on the
relevant button to select the product.
Included Modules In this list you can activate/deactivate the modules that are included
with your product.
Activate the checkbox to activate the corresponding module.
Optional Modules In this list you can activate/deactivate the modules that you have li-
censed as an option for your product.
Optional Hardware In this list you see the hardware that you have configured.
Save Information... Saves the current selection of modules within a .txt file.
Here you can create new users and groups and manage their access rights. Activate the user and
group management by activating the checkbox Enable User Management. For more details see
also User Management [} 42].
See also
2 Creating a New User [} 43]
2 Adding Users to a Group [} 45]
2 Managing Access Rights for User Groups [} 46]
Here you can customize the application layout, e.g. adapt the toolbar or shortcuts.
To learn more about how to customize the application, read the chapter Customizing Toolbar
[} 39].
Toolbar Tab
Here you can add menu items to the Toolbar as buttons for a quick access.
Parameter Description
Available Toolbar In this list you see all menu items that you can add to the Toolbar.
items
Add Adds a selected item to the Toolbar. It then appears in the Selected
Toolbar Items list.
Selected Toolbar In this list you see all the added menu items. Select the items here in
Items order to sort them.
Parameter Description
Delete Deletes a selected item from the Selected Toolbar Items list.
Down Moves a selected item one position down in the Selected Toolbar
Items list.
Separator Inserts a vertical separator into the Toolbar after the currently se-
lected item of the Selected Toolbar Items list.
Close Closes the Customize Application dialog and saves the adjustments.
Shortcuts Tab
Parameter Description
Available Com- In this list you see all commands from the Menubar and edited
mands Macros. Click on the arrow on the left of the entry to show available
commands.
Shortcut for the If you have selected a command from the Available Commands list,
selected item: the related shortcut is displayed here.
If the field is empty, no shortcut yet exists for the selected command.
Type a shortcut Here you can type in a shortcut by clicking on the desired keys of your
keyboard.
If a shortcut is already used for another command, it is displayed in
the Shortcut is used by: display field.
Shortcut is used If you typed in a shortcut which is already used, the related command
by: is displayed here.
All commands In this list you see all the shortcuts and their related commands.
with shortcuts:
Close Closes the Customize Application dialog and saves the adjustments.
Parameter Description
Available Items list In this list you see all items from the Menubar, Hardware Settings
and edited Macros. Click on the arrow on the left of the entry to
show available items.
Soft Keys The items from the Available Items list can be assigned to the but-
tons Function0-Function9 via drag & drop.
Parameter Description
Close Closes the Customize Application dialog and saves the adjustments.
In this dialog several categories of calibration files and references are stored. These entries are
mainly for your information. You can update the list, delete entries, write comments and copy the
list to the clipboard. For Axioscan 7 clinical, many categories are empty since they are not rele-
vant.
Parameter Description
Active Scaling: Displays the currently set scaling.
Select Automati- Activated: Calculates the scaling automatically from the microscope
cally and camera configuration.
Parameter Description
Scaling List Selects a scaling. The available scalings are the ones which are stored
on your system, e.g. Pixel, Theoretic. The list also includes the scalings
you created manually.
The scaling details are displayed in the fields below the list. If a display
field is empty, it is not used in the calculation of the scaling.
- Activate Scal- Activates the selected scaling. The scaling will be applied to all images
ing that are acquired from this time point onward.
- Import Opens the Import Scaling dialog window where you can select a
scaling file (.czsc) that you want to import.
- Export Opens the Export Scaling dialog window to export the selected scal-
ing. Select the folder in which you want the exported scaling file to be
saved and specify a file name (.czsc).
Interactive Calibra- Opens the Open file for interactive scaling dialog, if there is no im-
tion... age selected yet.
Starts the scaling mode and displays the Scaling Wizard [} 832] in
the Center Screen Area for the currently selected image.
Here you can create a new scaling. To do this, draw a reference line with a predefined length into
the current image. An image of a calibration slide is best suited for this purpose.
Parameter Description
Tool Bar Here you can draw in two types of reference line. To do this, click on
one of the following buttons.
Parameter Description
- Select The cursor is in selection mode. You can move the dialog window or
select a reference line to edit it.
- Draw Parallel With that tool you draw two parallel lines along a distance with a
Reference Line known length. The two parallel lines correct errors in the parallel axis
resulting from the drawing of the lines. A third, corrected line is
drawn automatically from which the scaling is determined.
Automatic Line Activated: Automatically detects individual lines of the scale bar in
Detection the image close to the interactively defined distance. Using this
method the centers of the lines are determined exactly, increasing the
precision of the scaling.
Length Input Field Defines the length of the line you have drawn in.
Scaling Displays the calculated pixel scaling according to the drawn in line.
Name Defines the name for the scaling that will be created.
Save Scaling Saves the scaling that has been created with the specified name. The
scaling can be selected in the Scaling dialog for Available Scaling.
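The pixel scaling displayed after drawing the reference line follows from dividing the known physical length by the drawn line's length in pixels. A minimal sketch with invented example numbers (not taken from the manual):

```python
import math

# Minimal sketch of interactive scaling: the known physical length of the
# reference line divided by its length in pixels gives the pixel size.
# The coordinates and lengths below are invented for illustration.
def pixel_scaling_um(p1, p2, known_length_um: float) -> float:
    """Micrometers per pixel from a reference line drawn between p1 and p2."""
    length_px = math.dist(p1, p2)  # Euclidean length of the drawn line
    return known_length_um / length_px

# A 1000 um distance on a calibration slide drawn over 1538 pixels:
scale = pixel_scaling_um((100, 200), (1638, 200), 1000.0)
print(round(scale, 4))  # → 0.6502 um per pixel
```

This is also why a calibration slide is recommended: the longer and more precisely known the reference distance, the smaller the relative error of the resulting scaling.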
Here you can configure the settings for general software options.
Parameter Description
Select Automatically Activated: Automatically selects the user language of the oper-
ating system as the user language for the software.
Fixed Language Select the language from the dropdown list in which the soft-
ware will be run next time it is started.
Parameter Description
Show Application Activated: Displays the application selection dialog when the soft-
Selection ware starts.
Smart Application Activated: Displays the smart application selection dialog when the
Selection software starts.
Reload Last Used Activated: When the software starts, reloads all image documents
Documents that were open when you last exited the system.
Parameter Description
Experiment Not relevant for Axioscan 7 clinical.
Selects the desired behavior on system start up with regard to the ex-
periment management on the Acquisition tab.
Request Stage/Fo- Activated: Shows a message which asks you to perform stage/focus
cus Calibration on calibration. This is not relevant for Axioscan 7 clinical. The stage and
Startup focus will always be calibrated automatically during start up.
Show Welcome Activated: Displays the Welcome Screen when the software starts.
Screen on Startup
Info
Axioscan 7 clinical
This section is not relevant for Axioscan 7 clinical. Naming is controlled using the Naming tool
on the Scan tab.
Here you can specify how images are (automatically) named and indexed. Changes will be stored
after the session is ended.
Parameter Description
Name Preview Displays the preview of the naming format that will be allocated
next for the selected category.
Category Selects the category of the file you want to be named automati-
cally, e.g. an image or an experiment.
Format Specifies what information you want to include in the file name.
The info icon offers a tooltip with a list of IDs you can add.
Digits Selects how many digits you want the counter to have.
Initial Counter Value Here you can enter the desired first value of the counter.
Save/Restore Counter Activated: Saves the counter values for the individual categories.
Value If the software is restarted, the values are restored.
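The per-category counter behavior described above can be sketched as follows. The `<counter>` placeholder and the category names are invented for illustration; ZEN's actual format IDs are listed in the tooltip of the Format field.

```python
# Illustrative sketch of automatic naming with a zero-padded counter.
# The placeholder syntax and category names are assumptions for illustration,
# not ZEN's actual format IDs.
counters = {"Image": 1, "Experiment": 1}

def next_name(category: str, fmt: str, digits: int) -> str:
    """Return the next auto-generated name and advance that category's counter."""
    value = counters[category]
    counters[category] = value + 1
    # Zero-pad the counter to the configured number of digits.
    return fmt.replace("<counter>", str(value).zfill(digits))

print(next_name("Image", "Snap-<counter>", 3))  # Snap-001
print(next_name("Image", "Snap-<counter>", 3))  # Snap-002
```

The Save/Restore Counter Value option corresponds to persisting the `counters` dictionary across sessions instead of resetting it to the initial counter value.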
For Axioscan 7, the storage path for images is controlled by the Storage Location tool on the
Scan tab.
Parameter Description
Auto Save after Snap Not relevant for Axioscan 7
Activated: Automatically saves images that are acquired on the
Locate tab using the Snap button.
Deactivated: Saves images that are acquired on the Locate tab
using the Snap button temporarily in the Auto Save Path\Temp.
The image will be marked with an asterisk and will be deleted if
it is closed without saving.
Show "Discard All" If activated, the Discard All button is displayed in the Save Doc-
Button in Dialog to uments dialog.
Save Modified Docu- NOTICE! If you click on this button all un-saved images
ments will be deleted.
See also
2 Auto Save Tool [} 890]
Parameter Description
Show Rulers Activated: Displays rulers at the top and left-hand edge of the
image – the units used are according to the scaling settings.
Auto Fit Activated: Automatically adjusts the zoom factor of the image
so that the entire image is visible and the view area is filled.
Set Logarithmic Scale Activated: On the Display tab the frequency distribution (y-axis)
in Histogram of the histogram is plotted using a logarithmic scale.
Show Viewport Scale- Activated: Shows a scale bar within a small window in 2D view.
bar in 2D View
Show Viewport Scale- Activated: Shows a scale bar within a small window in the Live
bar in Live Window window.
Show Navigator in 2D Activated: Shows the Navigator window in the image area.
View
Parameter Description
Use Pan Mode in 2D Activated: Pan mode is activated automatically for tiled im-
View for Tile Images ages opened in 2D view.
Baseline Shift for Activated: Adds an offset of 10,000 to the processed Airyscan
Airyscan Viewer images. This makes it possible to display details in the processed
Airyscan image that have negative intensity values and are therefore
normally cut from the histogram.
Display section
Parameter Description
Enable Tree View Additionally shows the Tree view in the Center Screen area.
Show Time series/ Activated: Time series or movie images acquired while the stage co-
movie images ordinate is adjusted are shown without the bounding X/Y area (black
without Bounding border).
X/Y Area
Image Rendering
Parameter Description
Use Advanced Activated: Uses the advanced renderer.
Renderer
Image Pyramid
Parameter Description
Generate Image Activated: Generates an offline image pyramid if there is no pyramid
Pyramid if needed available and the number of pixels is above the specified lower limit.
Deactivated: Does not generate an image pyramid. NOTICE! This
can have a negative impact on viewer performance.
Lower Limit in Pix- Sets the lower limit in pixels. The default value can depend on the
els ZEN profile. If the image size exceeds this limit, a message box with a
request for pyramid calculation is displayed and the image pyramid is
calculated automatically.
Note: Pyramids might be generated upon opening an image which is large enough or creating a
new image with a processing function.
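As a hedged sketch of the decision described above: whether a pyramid is needed depends on the total pixel count versus the lower limit, and a typical pyramid halves the image per level until a tile-sized thumbnail remains. The lower limit, tile size, and 2x-per-level scheme below are assumed example values, not ZEN's actual defaults.

```python
import math

# Illustrative sketch: deciding whether an image needs a pyramid and how many
# downsampling levels a typical 2x-per-level pyramid would contain. The lower
# limit and tile size are example values, not ZEN's actual defaults.
def pyramid_levels(width: int, height: int, lower_limit_px: int = 4096 * 4096,
                   tile_size: int = 512) -> int:
    """Return 0 if no pyramid is needed, else the number of 2x levels."""
    if width * height <= lower_limit_px:
        return 0
    # Halve repeatedly until the longest side fits into a single tile.
    return max(0, math.ceil(math.log2(max(width, height) / tile_size)))

print(pyramid_levels(2000, 2000))      # small image: 0 levels
print(pyramid_levels(100000, 60000))   # whole slide image: several levels
```

This also illustrates the viewer-performance note: without the precomputed levels, every zoomed-out view of a large image would have to be downsampled from the full-resolution data on the fly.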
Swapping
Parameter Description
Single File Swap- An experimental implementation of swapping. Swapping is used by
ping the application when processing extremely large files (terabyte sized),
like on Lattice Lightsheet systems. Do not activate this option other-
wise.
Activated: If the opened images are too large for the system mem-
ory, the surplus data is stored on the hard drive as one file instead of
a multitude of them. This prevents instability of the operating system.
– Default Activated: Uses the default directory. The path is displayed in the
field below.
– Custom Activated: Enables the control below to set a custom directory.
3D View section
Parameter Description
Run Performance Runs a test routine which evaluates the performance of the graphic
Assessment card installed on the workstation. The result is an adjustment of the
precision and accuracy parameters to allow fluid interaction with the
rendered volume in 3D view.
Graphics Hardware Selects the performance class of your graphics hardware. A higher
Class performance class allows you to see your data in more detail but may
lead to crashes on unsuitable hardware. The following classes are
available:
§ Very Low
§ Low
§ Normal
§ High
§ Very High
Enable Prefetching Activated: The viewer prefetches time points for time-based datasets.
for time-based
datasets
Show Logo Activated: Logo is displayed in the lower right corner of the 3D view.
Parameter Description
Export Logo Activated: Exports the logo in render series and snapshots.
General section
Parameter Description
Automatically Add Activated: Automatically adds a scale bar to the image, if it was ac-
Scalebar Annota- quired via the Snap button.
tion at Snap
Show a Request to Activated: Shows a dialog which asks you to move manual compo-
Move Manual or nents. You have to confirm the dialog and move the component by
Coded Hardware hand.
Components
Show a Confirma- Activated: Shows a dialog which asks you to confirm to delete a
tion Dialog for channel or a track.
Channel/Track
Deletion
Lock Device Con- Activated: Prevents controls in the right tool area from being undocked
trols in Right Tool during an experiment.
Area During Run-
ning Experiments
Camera/Live section
Parameter Description
Stop Live after Activated: Automatically closes the Live mode after an acquisition
Snap via the Snap button.
Stage/Focus Con- Enables you to navigate the stage and focus in Live and Continuous
trol in Live/Contin- view.
uous View
Configure the travel speed of the focus by adjusting the values in the
corresponding fields from Very Slow = 0.005 to Very Fast = 50.0.
Reset your adjustments by clicking on the Default button.
Show Camera Ex- Activated: Shows advanced (expert) camera options on Locate tab
pert Options within the Camera tool.
Use Centered Cam- Activated: Positions a camera ROI at the center of the camera chip
era ROI only regardless of its size.
Centered Camera ROI = center of camera detector
Parameter Description
Acquisition Tab Activated: Enables the use and set-up of experiments without any
without channel channel support in the Acquisition tab.
support
Parameter Description
Prevent Execution Activated: Prevents execution of after channel setting automatism
of After Channel while Live mode is active.
Setting while Live
Mode is Active
Automatically start Activated: Starts the Live mode when the Set Exposure button has
Live Mode when been pressed such that the live image begins immediately after the
Exposure Measure- Set Exposure measurement is complete.
ment was Started Deactivated: Takes a Snap subsequent to Set Exposure.
Switch to next En- Activated: Automatically switches to the next enabled acquisition
abled Acquisition block in the Experiment Designer during a running experiment.
Block in Experi-
ment Designer
Enable Imaging Shows the Imaging Setup tool on the Acquisition tab.
Setup
Z-Stack section
Parameter Description
Adjust Auto-Z- Determines the degree of match between the image focus of the first
Stack Focus Match image and that determined as the true focus (center plane of the re-
on First Slice sulting Z-stack).
Adjust Auto-Z- Determines the degree of match between the image focus of the last
Stack Focus Match image and that determined as the true focus (center plane of the re-
on Last Slice sulting Z-stack).
Delay Time After Specifies a delay time after each focus movement during Z-Stack ex-
Focus Move periments in ms.
Parameter Description
Automatically Activated: Automatically starts the Live mode in the Center Screen
Start Live Mode in Area if you click the Advanced Setup button in the Tiles tool on
the Advanced the Acquisition tab.
Setup View
Deactivate this option to prevent unnecessary specimen bleaching. It
is deactivated by default.
Automatic Snap by Activated: Acquires an image if you click on one of the frame's blue
Clicking the Live arrow icons. The Live Navigator tool moves one frame width in the
Navigator Buttons relevant direction. You can create tile images of your sample easily in
this way.
Parameter Description
Enable Stage In the Live navigator tool the current stage position including the live
Movement with image is shown as a frame outlined in blue. To move the frame, dou-
Live Navigator ble-click on the position to which you want to move it. Alternatively,
place the mouse cursor over the blue frame, press and hold the left
mouse button and drag the live navigator to the desired location.
Activated: Allows you to move the Live Navigator tool by dragging
it to a new location.
Show Stage and Activated: In the Tiles option, the setting to switch the backlash cor-
Focus Backlash rection on or off is shown. Per default it is hidden.
Correction Setting
in the Options
Delimiter for CSV Specifies the delimiter for a CSV export or import. Select Comma (de-
Export/Import fault), Semicolon or Tab.
Ask Whether Sup- When the support points and/or positions are determined by a soft-
port Points/Posi- ware autofocus run, the existing points can be overwritten with the
tions Should be new Z values.
Overwritten
Activated: Shows a message box asking if the points should be over-
written if there is an autofocus Z value.
Focus Surface Out- Ignores support points that are significantly outside the interpolated
lier Determination focus surface.
You have the following setting options available:
- Maximum Inter- This value can be 0 or 1. If 1, a linear fit is used to detect the out-
polation Degree lier support points. This is the default. If 0, a simple average value is
for Outlier De- used to detect outliers.
tection
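The two detection modes above can be sketched in plain Python. The least-squares plane fit, the tolerance value, and the sample coordinates are illustrative assumptions, not ZEN's actual implementation.

```python
# Illustrative sketch of focus-surface outlier determination. The tolerance
# value and the exact fitting procedure are assumptions for illustration.
def _det3(m):
    """Determinant of a 3x3 matrix."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def _fit_plane(points):
    """Least-squares plane z = a*x + b*y + c via normal equations (Cramer's rule)."""
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points)
    syz = sum(p[1] * p[2] for p in points)
    m = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, len(points)]]
    v = [sxz, syz, sz]
    d = _det3(m)
    coef = []
    for i in range(3):
        mi = [row[:] for row in m]
        for r in range(3):
            mi[r][i] = v[r]
        coef.append(_det3(mi) / d)
    return coef  # [a, b, c]

def find_outliers(points, degree=1, tolerance=10.0):
    """Return indices of support points far from the fitted focus surface.

    degree 0: compare each Z against the simple average (as described above).
    degree 1: compare against a least-squares plane (the default).
    """
    if degree == 0:
        mean_z = sum(p[2] for p in points) / len(points)
        predict = lambda x, y: mean_z
    else:
        a, b, c = _fit_plane(points)
        predict = lambda x, y: a * x + b * y + c
    return [i for i, (x, y, z) in enumerate(points)
            if abs(z - predict(x, y)) > tolerance]

# Support points on a tilted plane z = 0.1x + 0.05y + 10, with the center
# point pushed 45 um off the surface:
pts = [(x, y, 0.1 * x + 0.05 * y + 10.0)
       for x in (0, 100, 200) for y in (0, 100, 200)]
pts[4] = (100, 100, 70.0)
print(find_outliers(pts, degree=1))  # [4]
```

The example shows why the linear fit is the default: on a tilted sample, comparing against a simple average would flag perfectly good support points at the high and low ends of the tilt.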
Delay Time After Defines a delay period which is used for all stage movements in a tiles
Stage Movements and position experiment or movements controlled in the advanced tile
setup. The delay helps prevent movement in samples where, for ex-
ample, a large volume of liquid is present in the sample holder. It can
be used with the stage speed and acceleration options to optimise ex-
periments with this type of sample.
Binning Compen- Defines the power to which the binning ratio is raised to automati-
sation of Exposure cally determine the exposure time value used for a preview scan where
Time in Preview the binning setting between the experiment and preview scan differs.
Scans The default value is 2.0, i.e. quadratic. Thus, for example, the exposure
time would be reduced by a factor of four if the experiment binning is
1x1 and the preview scan binning is 2x2. The value can be varied be-
tween 1.0 and 2.0 in steps of 0.1.
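The worked example above (1x1 experiment binning, 2x2 preview binning, power 2.0, factor four) can be reproduced with a short sketch. The formula is inferred from that example and is not guaranteed to match ZEN's internal computation.

```python
# Worked example of the binning compensation described above. The formula
# (binning ratio raised to the configured power) is inferred from the manual's
# 1x1 -> 2x2 example; treat it as an illustration, not ZEN's exact code.
def preview_exposure_ms(experiment_exposure_ms: float,
                        experiment_binning: int,
                        preview_binning: int,
                        power: float = 2.0) -> float:
    """Scale the exposure time by the binning ratio raised to `power`."""
    ratio = preview_binning / experiment_binning
    return experiment_exposure_ms / (ratio ** power)

# Experiment: 100 ms at 1x1 binning; preview at 2x2 binning, default power 2.0:
print(preview_exposure_ms(100.0, 1, 2))        # 25.0 ms (factor of four)
print(preview_exposure_ms(100.0, 1, 2, 1.0))   # 50.0 ms with linear power 1.0
```

A power of 2.0 matches the idealized case where 2x2 binning collects light from four times the sensor area per pixel; lowering the power toward 1.0 compensates less aggressively.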
Parameter Description
- Use Imaging Activated: Default setting for the live image that allows navigation
Device from and focus interaction during the carrier calibration wizard.
Selected
Channel with
"Acquisition"
Settings
- Use Active This option is only relevant for systems with a wide field (camera
Camera with based) detector.
"Locate" Set- Activated: Allows you to alternatively apply locate camera settings
tings for use in the carrier calibration wizard (live image). By default the ex-
periment settings for the currently selected channel/track will be used.
Panorama section
Parameter Description
Automatically Activated: Specifies that the live mode will start running automati-
Start Live Mode in cally when you begin a panorama experiment.
the Panorama
View
Automatically Activated: Automatically moves the stage half a camera frame diago-
Move Stage/Live nally after acquisition of a snap image. Thus, the snap image can be
after an Acquisi- inspected.
tion
Enable Trans- Activated: Displays the selected tile image with a transparency effect
parency Effect on that enables you to see it in relation to the tiles underneath (lower
Selected Tile Im- layer = earlier acquisition) and those above (upper layer = more recent
age acquisition) at the same time.
You can also adjust the degree of overlap of the panorama grid. The de-
fault value is 20%; changes require a restart to become effective.
Note that this and the transparency effect parameters are only rele-
vant for manual stages.
Grid Overlap Specifies the degree of overlap for the panorama grid in %. A soft-
ware restart is required.
Show Live Activated: Note that this option should only be activated if you have
Panorama Acquisi- issues with the camera/live image during acquisition. It is only avail-
tion Options able for troubleshooting and normally not needed.
Not activated: Default. The software tries to use the determined
value for the current camera automatically. If you are unsure whether
your camera supports the functionality, we recommend leaving the default.
Parameter Description
Show a Dialog to Activated: Reminds you to make appropriate adjustments to the fo-
Prepare the Defi- cus prior to initialization at the start of experiments using Definite Fo-
nite Focus Initial- cus.
ization
Parameter Description
Enable Definite Fo- Activated: Specifies whether the last successfully determined z posi-
cus Stabilization tion is used if the primary focus action (Definite Focus or Autofocus)
on a Suitable Fall- fails.
back Position
Show Definite Fo- Activated: An additional section is displayed in the definite focus
cus Setting "Reso- strategy. This allows selection of three definite focus modes for
lution and Speed" greater speed or accuracy of the stabilization.
Parameter Description
Time until a dead- Here you can adjust the time until ZEN assumes that a deadlock has
lock of the syn- occurred after a synchronized script has been executed. If this value is
chronized script is exceeded, the function is aborted.
assumed
Parameter Description
Acquisition Online- Selects the compression for acquisition.
Compression
– None (uncom- Uses no compression. It is useful for low CPU load optimization (very
pressed) fast and short experiments).
– Zstd (lossless) Uses lossless zstd compression for reduced data rate and image size.
Enable Online MIP Activated: Enables the online MIP for fast data browsing after acqui-
sition.
Deactivated: Does not enable the online MIP for fastest data rate.
Parameter Description
Activate the online Specifies whether the disparity map is used to warp the LSM and
disparity warping Airyscan frames.
Path to disparity Displays the path where the disparity map is saved.
map
Parameter Description
– Nearest Neighbor: The output pixel is given by the gray value of the input pixel that is closest to it.

– Linear: The output pixel is given by the gray value resulting from the linear interpolation of the input pixels closest to it.

– Cubic: The output pixel is given by the gray value resulting from a cubic polynomial interpolation of the input pixels closest to it.

Disparity Map Calibration Integrates the Radial Distortion
Activated: Integrates the radial distortion into the calculation of the disparity map.
Note: This option should not be deactivated, because deactivating it has a negative effect on the quality of the results.
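In one dimension, the three interpolation modes can be sketched like this. Catmull-Rom is used here as one common choice of cubic kernel; the manual does not specify which cubic polynomial ZEN uses:

```python
def nearest(samples, x):
    # Take the gray value of the closest input sample.
    return samples[min(range(len(samples)), key=lambda i: abs(i - x))]

def linear(samples, x):
    # Weighted average of the two neighboring samples.
    i = min(int(x), len(samples) - 2)
    t = x - i
    return (1 - t) * samples[i] + t * samples[i + 1]

def cubic(samples, x):
    # Catmull-Rom cubic through the four surrounding samples
    # (an illustrative kernel choice, not necessarily ZEN's).
    i = min(max(int(x), 1), len(samples) - 3)
    t = x - i
    p0, p1, p2, p3 = samples[i - 1:i + 3]
    return 0.5 * ((2 * p1) + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

samples = [0, 10, 20, 30]
print(nearest(samples, 1.2), linear(samples, 1.5), cubic(samples, 1.5))
```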
LSM section
This section is only visible for an LSM.
Parameter Description
Online Scanner Correction
Activated: Enables the online scanner correction. It ensures optimal image quality at scan speeds > 13.

Bidi Auto Correction
Automatically corrects line shifts for bidirectional scanning for Multiplex Imaging modes.

Set lasers OFF when unused for 30 minutes (requires ZEN restart)
When activated, any laser which is not activated for an experiment will automatically be set to OFF mode after 30 minutes. This does not apply to multiphoton lasers.
Keep Airyscan Raw Data
Only visible if the Airyscan Raw Data module is licensed and activated in Tools > Modules Manager.
Activated: All Airyscan data is stored in the 32ch format. Processing with the Airyscan tools in the viewer and processing tabs is unchanged; however, with large datasets the processing might take longer, see Airyscan RAW Data [} 721].
Deactivated: The ring preprocessing is active and Airyscan data is stored in a 4ch format, which allows faster processing. The Sheppard sum export is also available in this format.

Enable Z-Piezo
Activated: Uses the Z-Piezo drive (if available) for the acquisition of Z-Stacks. Stacks with a range that is larger than the specified working range of the piezo drive are automatically carried out using the microscope's focus drive.

Z-Piezo Range
Select the range of the Z-Piezo drive. Note that the precision in the high range is lower compared to the default range.
Note: This function is not available with LSM 980. With LSM 980 the range of the Z-Piezo is always 500 µm with high precision.
Axioscan
This section is only visible for the Axioscan (ZEN slidescan).
Parameter Description
Unload Last Tray after Scan Job is Finished
Activated: Unloads the tray after the scan is finished.

Use Separate Folder for Temporary Files
Activated: Uses a separate folder for temporary files and displays the folder path. The Change Folder button opens a browser window to set a custom path for temporary files, see Using a Separate Folder for Temporary Files [} 1106].

Keep Separate Label Image File after Finished Scan
Activated: Keeps the label image as a separate file after the scan is finished.

Keep Separate Preview Image File after Finished Scan
Activated: Keeps the preview image as a separate file after the scan is finished.

Force Sequential Processing for Line Scan (BF)
Activated: Forces sequential processing for images acquired by line scan.

Allow "All channels per tile" Dimension Order for xPol and pPol Despite Shortened Lifetime of Condenser
Activated: Allows the use of the dimension order "All channels per tile" for xPol and pPol, even if this shortens the lifetime of the condenser.

Delimiter for CSV Import
Selects the delimiter for imported csv files from the dropdown. The following options are available:
§ Comma
§ Semicolon
§ Tab
§ Space

Minimum Margin of Focus Point Distribution in Tile(s)
Sets the minimum margin for the distribution of focus points in a tile.

Maximal Allowed Pixel Blur in Flash Acquisition
Sets the maximal value for pixel blur that is allowed in flash acquisition.

Frame Rate Limitation for Acquisition Monitoring
Sets a frame rate limitation for the acquisition monitoring.
Here, you can enter user and company information. These are written into the image metadata
during acquisition.
Parameter Description
User Information
Type in the contact information of the software user here.

Logo
Upload a company logo to the company profile here. To do so, click on the button.
Parameter Description
Import the first imported row as column caption
Activated: Uses the first imported row as the caption for the column.

Import the second imported row as column unit
Activated: Uses the second imported row as the unit for the column.

Automatic CSV format detection
Activated: Automatically tries to detect the format of the data table when importing the table to the software.

Import datapoints from the file starting from datarow
Defines the row of the file at which data import starts.

Use column, decimal and list separator from windows regions settings
Activated: Uses the separators configured in the Windows region settings when importing a table to the software.

Column Separators
Only available if you have deactivated Automatic CSV format detection and Use column, decimal and list separator from windows regions settings.
Configures the import options according to the format of the data table you want to import, e.g. specifies the type of column separator.

Decimal Separator
Only available if you have deactivated Automatic CSV format detection and Use column, decimal and list separator from windows regions settings.
Selects which decimal separator should be used.

Thousands Separator
Only available if you have deactivated Automatic CSV format detection and Use column, decimal and list separator from windows regions settings.
Selects which thousands separator should be used.
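The effect of these separator options can be sketched with a minimal parser. The function name, defaults, and conversion rules here are illustrative assumptions, not ZEN's actual importer:

```python
import csv
import io

def parse_table(text, column_sep=";", decimal_sep=",", thousands_sep=".", start_row=1):
    """Parse a CSV-like table with explicit separators.

    Sketch of the import options above (illustrative, not ZEN's code):
    split rows by column_sep, start at start_row, and convert numeric
    cells by stripping the thousands separator and normalizing the
    decimal separator to a dot.
    """
    rows = list(csv.reader(io.StringIO(text), delimiter=column_sep))
    out = []
    for row in rows[start_row - 1:]:
        converted = []
        for cell in row:
            cleaned = cell.replace(thousands_sep, "").replace(decimal_sep, ".")
            try:
                converted.append(float(cleaned))
            except ValueError:
                converted.append(cell)  # keep non-numeric cells as text
        out.append(converted)
    return out

print(parse_table("a;b\n1.234,5;2,5", start_row=2))  # [[1234.5, 2.5]]
```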
Parameter Description
Number of Decimal Places
Sets the maximum number of decimal places for the numbers imported into the data table.
Parameter Function
Show Inherited Members in Pop-up
Activated: Shows the inherited members of the ZEN class in a pop-up window.

Disable Full Screen Mode while Debugging
Activated: Disables the full screen mode during debugging.
Parameter Description
Enable TCP Macro
Activated: Enables entry of the TCP Port Number.

Allow IPv4 Nat Traversal
Attention! For experts only. Only activate this option if you know what you are doing.
Macro Recorder
Parameter Description
Overwrite Interactive Recording Flag
Activated: Overwrites the parameter for interactive execution of a function during recording with the macro recorder.
Parameter Description
Access Token
Sets/displays your private access token, which is required to use the arivis Cloud functionalities within ZEN, see Creating and Entering an Access Token [} 206].

Select Execution Mode
Selects whether the execution should be done locally or on a remote server. Available options:
§ Use Local Docker Desktop
§ Use Remote Docker Host
Parameter Description
Default Execution Location
Only visible if Use Local Docker Desktop is selected.
Selects a default location where the arivis Cloud module is executed and the outputs are saved.

Mount NVIDIA GPU
Activated: Uses the NVIDIA GPU of your computer when executing models requiring a GPU.
Deactivated: Disables the use of the GPU. Note that you can then only execute models using the CPU. Models that require a GPU will not work if this option is deactivated!
Info
Ubuntu Distribution Required
If you want to execute your arivis cloud modules on a remote Linux machine, this remote ma-
chine needs to run an Ubuntu distribution.
Parameter Description
Remote Docker Host API Address
Sets the address and port of the remote Docker host.

Check Connection
Checks if the remote Docker host can receive a request. A green checkmark indicates a successful connection; otherwise a red x is displayed.

Select Authentication Mode
Selects how you authenticate yourself to the remote host.

– Basic Authentication: Displays fields to enter the Username and Password required for the remote host, as well as the checkbox Use TLS (https).

– Certificate File Authentication: Displays fields to browse for the Certification File and enter a Password (optional) required for the remote host.

– Windows Certificate Authentication: Displays a field to browse for a Certificate from Windows Store required for the remote host.

Local to Remote File System Mapping
Area to map a folder path from the local computer to the same path on the remote computer.

– Path List: Displays and maps the path of a folder on this machine to the path of the same folder on the remote Linux machine.

– Opens the dialog to add a new path mapping.

Export Remote Settings
Opens a file browser to export the current settings as an xml file.

Import Remote Settings
Opens a file browser to select and import an xml file with settings for the remote setup.
Advanced Settings
Displays advanced settings for arivis Cloud (on-site).
Parameter Description
Client Name
Sets a custom client name for this computer.

Ports that can be used for the Docker Containers
Defines the ports which can be used by the Docker containers.
See also
2 arivis Cloud (on-site) [} 206]
Parameter Description
Local Computer Path
Selects the path to the folder on this local computer. Click to open a file explorer and browse to the folder.

Remote Computer Path
Defines the path to the same folder on the remote computer.

Check Mapping
Checks if the two paths point to the same folder and displays whether the check is successful.
See also
2 Creating a Path Mapping [} 208]
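The translation implied by such a path mapping can be sketched as follows. The mapping table, example paths, and function name are illustrative, not ZEN defaults:

```python
from pathlib import PureWindowsPath, PurePosixPath

def map_to_remote(local_path, mappings):
    """Translate a local (Windows) path to its remote (Linux) counterpart.

    `mappings` pairs a local folder with the same folder's path on the
    remote machine, mirroring the Path List above (sketch only).
    """
    p = PureWindowsPath(local_path)
    for local_root, remote_root in mappings.items():
        root = PureWindowsPath(local_root)
        if p == root or root in p.parents:
            rel = p.relative_to(root)
            return str(PurePosixPath(remote_root, *rel.parts))
    raise ValueError("no mapping covers %s" % local_path)

mappings = {r"D:\Data": "/mnt/data"}  # hypothetical example mapping
print(map_to_remote(r"D:\Data\run1\img.czi", mappings))  # /mnt/data/run1/img.czi
```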
Parameter Description
Data Compression
Selects which data compression is used for images processed in Batch mode in ZEN.

– ZSTD (lossless): Uses lossless zstd compression for reduced data rate and image size.
On this tab you have the options to set up the communication between PCs for Direct Processing,
see Connecting Acquisition Computer and Processing Computer [} 222].
Setup Acquisition PC
In this section you can set up the acquisition computer if you use a discovery proxy, see Connect-
ing Computers With Discovery Proxy [} 226].
Parameter Description
Find From Discovery Proxy
Activated: Displays the control to define a Discovery Proxy from which you can get a list of available PCs for processing.

IP Address (this PC)
Displays the IP address of your computer and allows you to select the IP address used by this computer (if more than one network connection is active).

– Select IP Address: Activated: Enables the selection of the IP address used by this computer (if more than one network connection is active).
Setup Processing PC
In this section you can set up the processing computer.
Parameter Description
Announcement to Discovery Proxy
Activated: Announces/registers itself to a Discovery Proxy server for the communication between the computers, see Connecting Computers With Discovery Proxy [} 226].

IP Address (this PC)
Displays the IP address of your computer and allows you to select the IP address used by this computer (if more than one network connection is active).

PC Description
Here you can add specific information about the processing computer that will be visible on the acquisition computer.

Send Hardware Information (optional)
Activated: Displays the hardware information of this computer, which is also visible on the acquisition computer.
Parameter Description
Start Receiving on Startup
Activated: The computer starts receiving processing requests by default.
Deactivated: The computer does not start receiving processing requests automatically on startup. The receiving has to be started manually in the Direct Processing tool on the Applications tab.
Parameter Description
IP Address (this PC)
Displays the IP address of your computer. Selects the IP address used by this computer for the discovery proxy server (if more than one network connection is active).

Start/Stop
Uses your computer as Discovery Proxy. The button changes to Stop to stop acting as Discovery Proxy.

IP Address to Clipboard
Only available if the Start button has been clicked.
Copies the IP address of your computer to the clipboard.
Advanced Settings
Parameter Description
Communication Mode
Selects the mode used for communication between the PCs.

– WCF: Uses the WCF communication mode. This mode is compatible with ZEN versions prior to ZEN 3.7.

– gRPC: Uses the gRPC communication mode. This mode is incompatible with ZEN versions using WCF communication, i.e. all versions prior to ZEN 3.7. Thus communication with older versions is not possible.

– Zstd: Uses lossless zstd compression for reduced data rate and image size.
See also
2 Direct Processing [} 219]
Parameter Description
Stage Size

– Stage Size in mm: Positions the images initially in a better way in the correlative workspace. For example, for a 130x100 mm stage, an image at stage position 65x50 mm will be placed in the center of the correlative workspace.
Note that the stage size will only be taken into account for better initial positioning if you have calibrated the stage upon startup or later, but before creating the ZEN Connect project.
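The centering rule in the example above can be sketched as a simple coordinate shift. The coordinate convention and function name are illustrative, not ZEN's internal implementation:

```python
def to_workspace(stage_xy_mm, stage_size_mm):
    """Map a stage position to workspace coordinates (sketch only).

    The workspace origin is taken as the stage center, so a position at
    half the stage size maps to (0, 0), i.e. the workspace center.
    """
    return tuple(p - s / 2.0 for p, s in zip(stage_xy_mm, stage_size_mm))

# 130x100 mm stage, image at stage position 65x50 mm -> workspace center:
print(to_workspace((65.0, 50.0), (130.0, 100.0)))  # (0.0, 0.0)
```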
3D Rotation

– Show Outlines of Rotated Cube: Activated: Displays a yellow outline of a rotated image in the Correlative Workspace. This outline is only visible in alignment mode, if 3D Alignment is selected and Apply 3D Rotation is activated.

2D Image Sets

– Always display single plane data sets: Activated: Always displays single plane data sets in the correlative workspace, even if the z-value set by the Global-Z is out of range for the 2D image(s).
Parameter Description
ImageJ Folder
Shows you where the software is installed.

ImageJ executable
Select your preferred .exe file in the drop-down list.

Shift Pixels to 16 bit
This function only works if the option Always is selected under Default preferred conversion (see below).
Activated: Shifts the gray values of a 10-bit or 12-bit image to 16-bit.
Deactivated: No shift takes place.

Default preferred conversion
You can specify the default for the preferred conversion.

Default preferred file format
You can select the default setting for the preferred file format. Available options in the drop-down list:
§ Automatic
§ czi
§ Ome Tiff
§ Tiff
§ Tiff With Display Mapping
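The "Shift Pixels to 16 bit" conversion above corresponds to a left bit shift. Assuming a plain shift (the manual does not state the exact scaling ZEN applies), a 12-bit value is moved into the upper bits of the 16-bit range:

```python
def shift_to_16bit(value, source_bits):
    """Left-shift a 10- or 12-bit gray value into the 16-bit range.

    Sketch of the conversion named above; a plain bit shift is an
    assumption about ZEN's exact scaling.
    """
    return value << (16 - source_bits)

print(shift_to_16bit(4095, 12))  # 65520: 12-bit full scale near 16-bit full scale
```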
Here you can configure the settings for the ZEN Data Storage.
Parameter Description
Simple
Displays the options for a general setup.

– Host Name: Displays and sets the host name of the server.

– Hosting Scheme: Selects if you want to set up your server communication with the http or the https protocol.

– Storage Server Port: Displays and sets the port for the storage server.

– Storage Server URL: Displays and sets the URL for the storage server.

Validate Settings
Validates the settings for the ZEN Data Storage and displays the status below, see Validating the ZEN Data Storage Settings [} 270].

Reset to Defaults
Resets all settings for the ZEN Data Storage to their defaults.
See also
2 Setting up ZEN Data Storage Server [} 269]
Here you can create collections for your data and specify their access.
Parameter Description
Collection Table
Displays the existing collections.

– Add: Opens the Add Collection dialog to add a collection.
See also
2 Add/Edit Collection Dialog [} 853]
2 Creating a Collection for Data [} 274]
2 Editing or Deleting a Data Collection [} 275]
Parameter Description
Collection Name
Sets the name for the collection.

Table

– Access Level: Displays and sets the access level of the user/group with the dropdown. For a description of the available access levels, see Add Collection Access Dialog [} 853].

– Add Access: Only available if you have started the application with active user management.
Opens the Add Collection Access dialog to add access for a user/group.

– Remove Access: Only available if you have started the application with active user management.
Removes the access for the selected user/group from the collection and deletes it from the list.
See also
2 Creating a Collection for Data [} 274]
With this dialog you can select a group/user who should have access to the collection. It is only
available if you have started the application with active user management.
Parameter Description
Groups
Displays all currently configured groups.

Access Level
Sets the access level for the currently selected group/user with the dropdown.

– Manage: Grants the group/user access to modify the access control list.

OK
Adds the selected group/user with the set access level to the collection and closes the dialog.

Cancel
Closes the dialog without adding any group/user to the collection.
See also
2 Creating a Collection for Data [} 274]
Parameter Description
Maximum Temperature Deviation
Displays or sets the maximum temperature deviation.

Metadata Logging Cycle
Sets the metadata logging cycle. A bigger logging interval is useful for very long or very fast experiments.
See also
2 Full Screen Mode [} 1050]
Depending on the system configuration and the licensed modules, this tab can have a different appearance. In general you can use the Locate tab for finding or "locating" interesting areas on your sample.
For mixed systems (e.g. an LSM including a microscope camera) the System Mode section is additionally available.
In the Eyepiece mode (for confocal systems) this tab contains only functions for controlling the light path and viewing the sample via the eyepiece, see Microscope Control Tool [} 980]. In the Camera mode this tab contains more control elements and tools, see Tools on Locate tab.
Mode Description
Eyepiece mode
If you switch to Eyepiece mode, the system adjusts the light path automatically to the eyepiece. The following list shows the changes in the ocular mode in detail:
§ All Action buttons are hidden.
§ Only the Microscope Control tool is visible.
§ Within the Microscope Control tool, only the light path leading to the eyepiece is displayed.
§ All possible light paths to cameras are hidden.
Parameter Description
Off
Closes the shutter of the transmitted/reflected light source on a motorized microscope.

Switch To section
These buttons configure the light path either for transmitted light observation or fluorescence observation. The observation is either via Ocular or Camera, which is set in the System Mode section. The shutter for either observation path is not opened, to avoid unwanted illumination of the sample. The shutter for either mode can be operated directly on the microscope hardware or using the controls in the Transmitted Light/Reflected Light section.
Mode Description
Transmission
Click on the Transmission button to set the beam path in the microscope for observation of the sample using transmission illumination. This includes all motorized components of the microscope excluding the shutter in front of the transmitted light source. On a fully motorized microscope, no further hardware changes are necessary apart from opening the light shutter and setting the illumination strength of the halogen lamp to achieve transmission observation of the sample.
Note: When switching to Acquisition, the light shutter is closed. When switching back to Locate, the settings for Transmission observation will be restored.

Fluorescence
Click on the Fluorescence button to open the Add Dye or Contrast Method Dialog [} 869]. Choose the dye you want to observe in fluorescence mode, then close the tool. The beam path is set for fluorescence observation, including all motorized components of the microscope excluding the shutter in front of the reflected light source. On a fully motorized microscope, no further hardware changes are necessary apart from opening the light shutter and setting the illumination strength of the reflected light source to achieve fluorescence observation of the sample.
Note: When switching to Acquisition, the light shutter is closed. When switching back to Locate, the settings for Fluorescence observation will be restored.
Favorites section
Parameter Description
Configure...
Here you can configure further buttons with your favorite hardware setting functions. Click on the button to open the Configuration dialog.
Parameter Function
Find Focus
Only visible if you have configured a motorized focus in the MTB (MicroToolBox).
Starts an autofocus search using the current settings from the Software Autofocus tool.

Live
Opens Live View and shows the live image from the active camera, or the first channel of the first track when acquisition is performed with LSM.

Continuous
Starts a series of Snaps using the settings defined in the Light Path and Camera tool. In contrast to a live image, the exact same camera setting that has been set in the Camera tool is used. The result at the end of this mode is a single, acquired image that can be saved.
Snap button
Parameter Description
Link Cameras
Only active if you have connected two structurally identical cameras to your system.
Activated: Acquires images using two cameras in parallel. This is often the case with 2-channel images for ratio measurements or FRET measurements.

Active Camera
Shows the active camera. If you have several cameras connected, you can select the detector to use here.
Tools section
Depending on which modules you have purchased, you see different tools available in this section, see Tools on Locate tab.
See also
2 Configure your hardware setting favorites Dialog [} 859]
2 Microscope Control Tool [} 980]
Here you configure up to 20 new buttons to get quick access to your preferred camera and hard-
ware settings.
Info
To create and edit settings you need the settings editor. Click on Tools > Settings Editor.
Parameter Description
Favorite Settings
If you have not yet defined any buttons, you will see an empty list here. To create a new button, click on the Add button. Your Favorites are displayed as buttons on the Locate tab in the Favorites section.
You can configure your favorite setting in the input fields:

– Color: Here you can select a color for the related button. Click on the color dropdown list to choose a color.

– Use Color also for Button Text: Activated: Uses the selected color as the button text color.

Available hardware settings on disk
Here you see a list of all hardware settings that are saved on your hard drive. Select the hardware setting that you want to use with the configured button. To add a hardware setting, simply drag & drop it to the desired button configuration.

Available camera settings on disk
Here you see a list of all camera settings that are saved on your hard drive. Select the camera setting that you want to use with the configured button. To add a camera setting, simply drag & drop it to the desired button configuration.
Info
The content of the tab changes depending on the configuration of your imaging system and
the options that you activate or deactivate.
Settings that you configure in the top part of the tab have an effect on settings in the bottom
part of the tab. Settings that you configure in the Acquisition Parameter tool group, e.g. in
the Channels tool also apply to the acquisition of all images that you configure in the Multi-
dimensional Acquisition tool group.
1 Experiment Manager
The area above the blue tools where you can load and save your experiments, control acquisition, and decide which tools will appear in the individual tool groups.
For further information on Experiment Options, see Experiment Options [} 861].
2 Smart Setup
Opens the Smart Setup dialog, see Smart Setup [} 862].
3 Sample Navigator
The Sample Navigator wizard is a tool to find the focus plane and quickly acquire an
overview scan of your sample. You can also simplify the search for a region of interest
for the actual imaging experiment. For further information, see Using the Sample Navi-
gator with LSM 980 and LSM 900 [} 1209].
4 Action Buttons
With these buttons you control microscope and camera and acquire your images. For
further information on action buttons, see Action Buttons [} 861].
5 Acquisition Dimensions
Here you activate the desired dimensions (e.g. Z-Stack or Tiles) for your experiment. A
drop down menu appears when selecting the dimensions to determine the sequence of
acquisition. For more information, see Acquisition Sequence [} 871].
The corresponding fields to the right of each dimension show a preview of how exten-
sive the experiment will be (e.g. 9 Tiles).
Below the dimensions section you activate additional experiment features (e.g. Auto
Save) or special modules (e.g. Shuttle & Find).
6 Experiment Preview
Here you can see a graphical representation of the configured experiment. The Disc icon
indicates that you have enabled Auto Save function for the experiment.
With these buttons you control microscope and camera and acquire your images.
The Acquisition buttons on the Acquisition tab differ from the Acquisition buttons on the Locate
tab.
The buttons on the Locate tab relate to an individual image. The buttons on the Acquisition tab
relate to a multidimensional image with at least one channel.
1 Find Focus
Only visible if you have configured a motorized focus (MicroToolBox).
Starts an autofocus search using the settings from the Focus Devices tool. The autofo-
cus search is performed for the selected reference channel in the Channels tool.
2 Set Exposure
Starts an automatic exposure time measurement with the settings defined in the Light
Path and Camera tool.
3 Live
Starts the Live Mode. In the Center Screen Area you see the live image from the cam-
era.
4 Continuous
Starts a series of Snaps using the settings defined in the Light Path and Camera tool.
In contrast to a live image, the exact same camera setting that has been set in the Cam-
era tool is used.
5 Snap
A so-called "Snap" acquires a single image (snapshot).
For widefield systems an additional tiles snap option is available. You can choose be-
tween 2x2, 3x3, 4x4 and 5x5 presets.
6 Stop
Only active if one of the acquisition buttons has been clicked.
Stops the function of the relevant acquisition button.
In the Options shortcut menu you can create new experiments and rename, save, import,
export or delete existing experiments.
Save
Saves a modified experiment under the current name. An asterisk indicates the modified state.

Save As
Saves the current experiment under a new name. Enter a name for the experiment.

Set As Startup Default
Selecting this option will assign the currently loaded experiment as the default experiment, which is loaded every time the software is started. The startup default experiment is indicated by a special icon behind the experiment name.
You can decide to start the software with a particular default experiment, with the last used experiment or with an empty new experiment in Tools > Options > Startup/Shutdown > Experiment.
Smart Setup offers you support when configuring multichannel acquisition experiments. To start it, click on the Smart Setup button on the Acquisition tab.
Select the fluorescent dyes and contrast techniques that you want to include in your experiment
from a large dye database. Smart Setup takes the configuration of your microscope hardware and
the properties of the selected dyes into account. Based on this information, it makes one or more
suggestions for acquisition. You can adopt these into your experiment as required and make fur-
ther changes to them there.
Info
If Smart Setup is unable to make a proposal, it is not possible to use the selected dyes, con-
trast techniques, or current microscope hardware to make acquisitions. Select other dyes or
another contrast technique or configure your acquisition experiment using the Acquisition
Mode tool and the Channels tool.
§ Smart Setup tries to configure the motorized components of your system for the acquisition
of multichannel images.
§ Smart Setup does not change any parameters of other acquisition dimensions (e.g. Z-stack,
Time series, or Multi-position acquisitions).
§ For widefield tracks it does not influence any camera parameters (e.g., Exposure time or Reso-
lution).
§ For LSM tracks it adjusts parameters within the Imaging Setup, the Acquisition Mode and
the Channels tool windows.
Depending on your system you will see two buttons on top of the dialog.
2 Detector Selection
Only visible if two or more cameras are configured for the system. Here you can select
the desired camera for the experiment.
3 Experiment Configuration
Here you can add up to four reflected light fluorescence channels and transmitted light
contrast techniques to your experiment. The added dyes or the contrast technique are
shown in the list below.
A click on opens the Add Dye or Contrast Technique [} 869] dialog, where you
can select the desired dye or contrast technique from the Dye Database.
4 Motif Buttons
Here you can optimize image acquisition regarding particular requirements like speed or quality. All parameters, e.g. camera resolution or dynamic range in the Acquisition Mode or the Channels tool, are set automatically. They essentially influence the camera, detector, and lighting settings. For more information, see Motif Buttons (WF) [} 864].
Parameter Description
Automatic
§ The system tries to set the optimal resolution for the camera in the Acquisition Mode tool. The resolution is calculated from the total magnification at the camera adapter (objective + optovar + adapter).
§ Sets the Exposure Time:
  – Transmitted light channel: 10 ms
  – Reflected light channel: 10 ms
  – Fluorescence channel: 150 ms
§ Sets the Auto Exposure to Off
§ Sets the Intensity to the following:
  – Transmitted light channel: 70%
  – Reflected light channel: 70%
  – Fluorescence channel: 30%

Speed
§ If binning is supported, one binning category is set for the camera under the optimal resolution in the Acquisition Mode tool. The resolution is calculated from the total magnification at the camera adapter (objective + optovar + adapter).
§ Sets the power of Colibri-LEDs to 100% and the power of transmitted and reflected light sources to 10% or 3 V.
§ Sets the Exposure Time:
  – Transmitted light channel: 3 ms
  – Reflected light channel: 3 ms
  – Fluorescence channel: 50 ms
§ Sets the Auto Exposure to Off
§ Sets the Intensity to the following:
  – Transmitted light channel: 30%
  – Reflected light channel: 30%
  – Fluorescence channel: 15%

Signal
§ Sets the optimal resolution for the camera in the Acquisition Mode tool. The resolution is calculated from the total magnification at the camera adapter (objective + optovar + adapter).
§ Sets the power of Colibri-LEDs to 75% and the power of transmitted and reflected light sources to 10% or 3 V.
§ Sets the Exposure Time:
  – Transmitted light channel: 20 ms
  – Reflected light channel: 20 ms
  – Fluorescence channel: 300 ms
§ Sets the Auto Exposure to Off
§ Sets the Intensity to the following:
  – Transmitted light channel: 100%
  – Reflected light channel: 100%
  – Fluorescence channel: 60%

Default
Sets the parameters for Exposure Time, Auto Exposure, Intensity and EM Gain to the default values.
Parameter Description
Best Signal
This proposal results in the best signal strength.

Best Compromise
This proposal results in the best compromise between signal strength and fastest acquisition.

Parameter Description
Show Excitation
Activated: Shows the excitation spectrum of the selected dyes in the graphical display.

Parameter Description
Show Emission
Activated: Shows the emission spectrum of the selected dyes in the graphical display.

Cancel
Ends Smart Setup. The suggestions are not adopted into the experiment.
2 Detector Selection
Here you can select the detector. If your system is equipped with an Airyscan detector,
you can use Smart Setup to generate proposals for Airyscan instead of confocal acquisi-
tion, see also Airyscan Mode.
3 Experiment Configuration
Here you can add up to four reflected light fluorescence channels and transmitted light
contrast techniques to your experiment. The added dyes or the contrast technique are
shown in the list below.
A click on opens the Add Dye or Contrast Method Dialog [} 869], where you
can select the desired dye or contrast technique from the Dye Database.
4 Motif Buttons
Here you can optimize image acquisition regarding particular requirements like speed or
quality. All parameters, e.g. camera resolution or dynamic range in the Acquisition
Mode or the Channels tool, are set automatically.
The automatic settings influence parameters like Frame Size, Speed, Direction, Bit Depth
(in Acquisition Mode tool) and Pinhole Diameter, Gain, Laser Power (in Channels tool),
depending on the selected button.
For more information, see Motif Buttons (LSM) [} 867]. Various proposals for further ex-
periment settings are shown in the graphical display below the buttons.
Parameter Description
Current No changes are made. Only the necessary hardware settings for ac-
quisition are applied by Smart Setup.
NOTICE! If you changed hardware settings in the Acquisition
Mode tool manually and do not want to lose them, make sure
you select the Current button.
Signal § Aims to provide high quality images with best signal to noise ratio
§ Sets the frame size to a minimal value that fulfills the Nyquist crite-
rion, but to a maximum of 2048x2048 pixel
§ Sets the scanning speed to 6
§ Sets the scanning direction to uni-directional
§ Sets the Bit Depth to 16 bit
Parameter Description
Fastest This proposal results in the fastest acquisition.
Best Signal This proposal results in the best signal strength and minimizes the
level of cross talk.
Smartest (Line) Combines the advantages of Fastest and Best Signal. It minimizes
the number of tracks as well as cross talk.
Parameter Description
Show Emission Shows the emission spectrum of the selected dyes in the graphical
display.
Show Excitation Shows the excitation spectrum of the selected dyes in the graphical
display.
Sample Navigator Opens the Sample Navigator, see Sample navigator LSM 980 and LSM
900.
Cancel Ends Smart Setup. The suggestions are not adopted into the experi-
ment.
Info
The bars in the graphs only show relative values. The actual strength of the emission signal
and the crosstalk in the image can deviate substantially from this estimate, as Smart Setup has
no knowledge of the strength with which the sample has been dyed with the individual dye
components.
1 Emission Signal
A filled, colored bar in the Emission Signal display field shows the relative emission sig-
nal to be expected for the corresponding channel. The channel color corresponds to the
color of the selected dye in the Configure Experiment section.
2 Speed
A gray bar in the Speed display field represents the approximate acquisition speed that
can be expected. This is the time required for the movement of microscope hardware
during multichannel acquisition. Camera exposure times or parameters for other acquisi-
tion dimensions are not taken into account here.
3 Crosstalk
A hatched bar in the Crosstalk display field shows the expected relative crosstalk origi-
nating from one or more dyes for other channels.
4 Tracks display
Only visible if the Show Excitation and/or Show Emission checkboxes are activated.
The various tracks are labeled with T1, T2 etc. The white lines show the excitation and
emission spectra of the dyes schematically. The spectra are filled in color in the places
that will be acquired by the acquisition configuration suggested by Smart Setup. Trans-
mitted light channels are displayed as a white field.
Here you add dyes and contrast methods to your experiment. The dyes in the database contain
important information that is saved in the image document (e.g. spectral characteristics). This in-
formation can be used later during image processing (e.g. deconvolution).
Info
You can add additional dyes to the database with the Dye Editor under Tools > Dye Edi-
tor....
Parameter Description
Recently used Displays the six recently used dyes or contrast methods in a list. This
ensures that you have quick access to the dyes or contrast methods
that you use frequently.
Search Here you can enter the name or initial letters of the dye or contrasting
method that you want to search for. The search results are displayed
immediately in the Dye Database list or the Contrast Methods list.
If no search filter is active, the lists of dyes or contrast techniques are
arranged in alphabetical order. If you cannot find a certain dye, try us-
ing a related dye name or a general name.
Dye Database Select fluorescent dyes here. A double-click (or the Add button) adds a dye to the experiment. The left column shows the name of the dye; the right column contains its color and main emission wavelength. The
Custom entry adds a channel to your experiment without any addi-
tional information. This means that the resulting image cannot be
used for certain processing operations.
Add Adds the selected dye to the experiment. You can add several dyes/
contrast methods in a row.
9.2.2.4 Reuse
The Reuse functionality is only available if you have loaded an image in *.CZI image format. In this case the Reuse button appears on the Acquisition tab; otherwise the Reuse button is not active.
With this function you can apply the experiment setup of the acquired image to the current exper-
iment. This will help you to easily reproduce the acquisition conditions for the next image. The
function only works correctly if the system configuration at the time of acquisition is identical to
the system configuration at the time when you execute the function.
Removing components (e.g. filter cubes, LEDs, cameras, etc.) can result in an experiment being
created incorrectly. It is therefore essential that you check after executing the Reuse function
whether the configuration of the experiment is in line with your expectations.
Using the Reuse function for a Z-stack prompts a confirmation asking whether to place the Z-
stack at the current focus position or the original focus position of the acquired image.
Note that the original position may be far from the current position, and starting an experiment right away can even lead to the destruction of your sample.
Info
Clicking on the Reuse button overwrites the current experiment without a prompt and marks
it as having been modified. This can be seen from the appearance of an asterisk after the file
name. If you want to keep the experiment in its previous form, you must save the modified ex-
periment with a new file name under Experiment Manager > Options > Save As.
If you acquire images and save them in *.CZI image format, the following acquisition conditions
are saved together with the image:
The available options in the dropdown list depend on the selected acquisition dimensions.
Here you apply processing functions to acquired or loaded images. For general information on image processing, see Image Processing Workflow [} 72]. For detailed descriptions of the functions and the processing workflow, see Image Processing Functions.
1 Function
Using Single processing, you can apply a selected processing method, with the relevant
method and image parameters, to a single image.
Using Batch processing, you can apply a selected processing method, with the relevant
method and image parameters, to a list (batch) of images. In this mode only a limited se-
lection of processing functions is available. For more information, see Applying Batch
Processing [} 75].
With the Apply button at the top you apply the selected method to the input image.
2 Method
Here you select the image processing functions. Open the Method tool to show the list
of IP functions.
3 Method Parameters
Here you configure the parameters of the selected image processing function. Click on
the Parameters tool to show the parameters of the selected IP function.
4 Image Parameters
Here you configure the image parameters of the input and output image. Click on the In-
put tool or Output tool to open input/output image settings. For more information, see
Input Tool [} 971] and Output Tool [} 984].
Depending on your licenses, you find different tools available for image analysis on this tab.
See also
2 Interactive Measurement Tool [} 973]
2 Image Analysis Tool [} 940]
Depending on your licenses, you have the following tools available on this tab:
Our extensions concept allows you to extend the basic ZEN functionality by implementing third-party extensions, e.g. ImageJ. The extensions concept is part of OAD (Open Application Development) for ZEN (see also Open Application Development (OAD)).
Depending on which extension you have activated, you will see the extension's functions and controls on the Extensions tab. Note that functions of third-party extensions are not described here; please refer to the third-party documentation for each extension.
You can find more information on OAD and the supported extensions under www.zeiss.com/
zen-oad.
For more information on ImageJ, see ImageJ [} 282] .
9.3 Tools
In the Acquisition Mode tool you can set the various acquisition parameters that you want to
apply for the entire experiment.
Info
If you have created an experiment using the Experiment Designer tool, the settings in the
Acquisition Mode tool only apply to the relevant experiment block and may differ in the next
block.
In terms of content and appearance, the Acquisition mode tool is largely dependent on which
imaging mode was chosen in the Imaging Setup tool, either LSM tracks or widefield channels.
If you have configured LSM tracks, including e.g. Airyscan or Lambda Tracks, read the chapter
Parameters for LSM Imaging Modes [} 875].
If you have configured widefield channels, read the chapter Parameters for Widefield Mode
[} 880].
Adjust scanning and acquisition parameters that you want to apply for the entire experiment.
In case the experiment is run as a multiblock experiment, the settings apply to one experiment
block only.
Note that the available controls vary with the chosen track mode for LSM:
§ LSM confocal
§ Airyscan SR
§ Airyscan MPLX
§ NDD
§ LSM Lambda
§ LSM Online Fingerprinting
- Frame Activates the frame scan mode. If this mode is selected you will
see the representation of the scanning frame in the Scan Area
section.
- Line Activates the line scan mode. If this mode is selected you will see
the representation of the scanning line in the Scan Area section.
A line scan must be 128 pixels or more; lower values are set to this minimum.
- Spot Activates the spot scan mode. If this mode is selected (only avail-
able for LSM confocal mode) the scanner is stationary at a spot
and the signal intensity is acquired from this one position.
2 Crop Area Clicking the button projects a rectangular overlay into the center
of the displayed image document, representing the area at 2x
zoom. The overlay graphics can be adjusted to the sample region
to be imaged in any rectangular, rotated or non-rotated form
within the boundaries of the scan field. The subsequent image
acquisition will then be confined to this area.
Note that with a tile image the function will fail if the current
stage position is not in the center position of the tile region be-
cause Crop Area is confined to the Scan Area and does not
move the stage.
- Zoom Adjust the zoom level from 0.45x (0.6x for LSM 980 on Axio Observer
and Axio Imager; 0.7x for LSM 980 on Axio Examiner) to
40x by using the Zoom slider.
You can also enter a specific value in the input field. Clicking the
1 button behind the input field resets the zoom level to the
default: 1.0x for confocal acquisition, 1.7x for Airyscan SR
tracks on LSM 980, and 1.3x for Airyscan tracks on LSM 900.
3 Scan Area In this expandable section, you can adjust the position of the
scan area.
The outer frame corresponds to the field of view of the micro-
scope.
The inner frame represents the scan area. All changes of Offset
and Rotation made in this section will be immediately applied to
the scan area.
The following functions are available:
- Rotation Adjust the rotation angle by using the Rotation slider. You can
also enter a specific value in the input field. Clicking the 0
button behind the input field resets the rotation to the
default position (zero degrees).
- Reset Scan Area Resets all adjustments of Rotation, Offset and Zoom to the system defaults.
4 Frame Size/Sampling Adjust the frame size (in pixels) of the displayed image by entering
the desired values in the two input fields.
- Presets button By clicking on this button you can select from a list of default
frame sizes (e.g. 128 x 128 or 512 x 512). We recommend starting
with 512 x 512 px.
- Confocal button By clicking on this button the frame size (image resolution) will
be set to an optimal value corresponding to the optical magnification
(objective), the zoom factor and the wavelengths included
in the experiment. This provides an image where no spatial information
is lost and no empty data is generated, as optimal sampling
is achieved. The confocal value is calculated for the given
objective and magnification settings, matching 1x sampling
according to the Nyquist criterion. Rectangular image dimensions are
preserved.
When you press this button once, it changes its color to blue.
This indicates that the optimal sampling will be maintained as
you continue to change acquisition parameters such as zoom or laser
lines, or add more tracks. This active state of the button is
lost once you manually edit the frame size or click the button
again.
- SR button Clicking this button (for Airyscan SR mode only) sets the sam-
pling to 2x Nyquist for superresolution mode.
5 Scan Speed Set the scan speed by adjusting the slider from 1 (slow) to 19
(very fast; for LSM 900 the limit is 16). The corresponding values for
Frame Time and Pixel Time are displayed above the slider.
Note that the available maximum scan speed depends on the selected
Frame Size and zoom factor. In Airyscan MPLX mode the
speed is indicated in frames per second, estimated from pixel time
and pixel number (without scanner return time). The actual frame
rate might be lower, especially with very small frames. Maximum
speed is available with a zoom factor of 13.2x (LSM 980) or 6.5x
(LSM 900).
By clicking on the Max button the maximal possible scan speed
will be set automatically.
- Unidirectional The laser scans in one direction only, then moves back with the
beam blanked and scans the next line.
- Bi-directional The laser also scans when moving backwards, i.e. the scan time
is reduced by about a factor of two.
Note that the pixel shift between forward and backward move-
ment (double image) resulting from bi-directional scanning must
be corrected. To do that use the Correction X/Correction Y
sliders.
By clicking on the Auto button an automatic scan correction will
be performed.
For optimal results this correction should be repeated every time
scan parameters like rotation, frame size, zoom or speed are
changed.
8 Averaging
- Number Select the number of images you want to average (2x - 16x).
10 HDR Illumination This parameter is only available if you have licensed the HDR
Confocal Basic module.
If activated, an HDR effect is applied to the image. This effect
boosts weak structures without saturating bright areas in the
image and enables an optimal representation of the morphology
of weak and bright objects within the same image.
To achieve this, the image is scanned three times with increasing
excitation intensity. Areas of the image that showed overexposure
are excluded from the following scans in order to avoid
photobleaching. It is recommended to use 16 bit for the
acquisition of HDR datasets.
Note: HDR imaging is not possible when using an Experiment Region
for Acquisition (see Experiment Regions Tool); whichever function is
activated first (HDR or Experiment Region for Acquisition) blocks
activation of the other.
HDR was developed based on ideas and a concept of O. Ronneberger
and R. Nitschke (Albert-Ludwigs-University Freiburg,
Department of Computer Science and Life Imaging Center at
ZBSA).
See also
2 Experiment Regions Tool [} 926]
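The Confocal and SR buttons above set the frame size according to the Nyquist criterion (1x and 2x sampling, respectively). As a rough, illustrative calculation using standard optics formulas — this is not ZEN code, and the exact factors ZEN uses may differ:

```python
def nyquist_pixel_size_um(wavelength_nm, na, factor=1.0):
    """Approximate pixel size required for Nyquist sampling.

    Lateral resolution (Rayleigh) ~ 0.61 * lambda / NA; Nyquist demands
    at least two pixels per resolvable distance. factor=1 corresponds
    to 1x Nyquist ("Confocal" button), factor=2 to 2x ("SR" button).
    """
    resolution_um = 0.61 * wavelength_nm / na / 1000.0
    return resolution_um / (2.0 * factor)

def frame_size_px(field_of_view_um, pixel_size_um, cap=2048):
    """Frame size needed to cover the scan field at the given pixel size,
    capped at 2048 x 2048 as mentioned for the Signal motif."""
    return min(cap, int(round(field_of_view_um / pixel_size_um)))

# Example: 488 nm excitation, NA 1.4 objective, 100 um scan field
px = nyquist_pixel_size_um(488, 1.4)          # ~0.106 um at 1x Nyquist
size = frame_size_px(100.0, px)               # ~941 px per side
```

Increasing the zoom shrinks the scan field, which is why the required frame size drops accordingly when optimal sampling is kept active.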
In this section you can adopt camera settings from the active camera to your experiment and ad-
just basic camera settings.
Parameter Description
Get Settings from
Active Camera
- Get Applies the settings from the active camera to your experiment.
Parameter Description
Binning is not available for Axiocam 705 pol.
Resolution Displays and selects the camera resolution, e.g. 1024 x 1024 px.
IP Quality Here you can select the color interpolation quality (IP Quality) for the
acquired image. Note that this function does not apply to Live mode.
Fast: color interpolation for optimum speed (shorter computation).
High: color interpolation for optimum quality (less artifacts). This
mode is only effective with binning factor 1.
Subsampling Here you can reduce the amount of data acquired to achieve faster
frame rates. With 2x2 subsampling, the effective pixel pitch is increased
by sampling only every other pixel, thus reducing the overall data size
of your image.
The CMOS sensors in the Axiocam 705 and 712, with the exception
of the Axiocam 705 pol, offer an on-chip subsampling mode. This subsampling
mode enables a dramatic increase in frame rate, especially
for time series acquisition (at short exposure times), at the full field of
view of the camera.
In addition, the amount of produced image data is decreased. As this
is done by skipping every second pixel in the x and y directions, the optical
resolution is decreased accordingly. Especially with high-magnification
objectives at lower NA values, the image resolution is optically limited,
so this function can be used to minimize empty image information.
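The effect of 2x2 subsampling — keeping only every other pixel in x and y, which quarters the data volume — can be illustrated with a plain array sketch (this models the result, not the camera's on-chip implementation):

```python
def subsample_2x2(image):
    """Keep every other pixel in x and y (pure-Python sketch of the
    result of 2x2 subsampling; image is a list of rows of pixel values)."""
    return [row[::2] for row in image[::2]]

frame = [[x + 10 * y for x in range(8)] for y in range(8)]  # 8x8 test image
small = subsample_2x2(frame)                                 # 4x4 result
```

The output holds one quarter of the original pixels, which is why both the data size and the achievable frame-rate gain scale by roughly a factor of four.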
In this section you can define a Region Of Interest (ROI) on the camera sensor which will be
used for acquisition. A smaller ROI can increase the acquisition speed.
The region of interest is indicated by a blue frame in the preview window and can be moved and
resized freely. The preview window always shows the entire camera sensor area which can be ac-
quired.
The Pixel Size shown below the preview window indicates the size in µm to which a pixel corre-
sponds. This depends on the camera sensor properties and on the binning.
Parameter Description
Maximize Selects the entire available image sensor area as the region of interest.
Center Positions the region of interest precisely at the center of the image.
Size Sets the width and height of the region of interest in pixels.
Offset Specifies the position of the top left corner of the Acquisition ROI
(blue frame) with respect to the top left corner of the preview win-
dow.
Refresh Overview Acquires and displays an image in the preview window with the cur-
rent ROI settings. This has no effect on the image in the Center
Screen Area.
In this section you can set three different modes for acquisition.
Parameter Description
Interactive Using this mode you can intervene manually at certain points during
acquisition. The acquisition is comparatively slow.
Here you can apply basic image processing functions while acquiring the image. This can be help-
ful if certain image processing steps are necessary for any acquired image and saves image pro-
cessing work later in a job.
Depending on the camera model, different settings are available.
Parameter Description
Black Reference Influences the live image and each image acquired. For the black reference
to work, you first need to acquire a completely dark reference
image. To do this, the light path to the camera must be blocked by
setting the eyepiece switch to 100% eyepiece and closing the reflected
light/transmitted light shutter. Define a corresponding reference
image using the Define button.
Activated: Applies the measured black reference to the image.
Deactivated: The measured black reference is not used. The refer-
ence image is retained.
If longer exposure times are used (from exposure times of approx.
>5s, depending on the camera), individual bright pixels may become
visible with CCD or CMOS sensors. With the help of the black refer-
ence these effects are measured and corrected in accordance with the
exposure time employed.
It is recommended that you repeat this measurement at certain inter-
vals. This correction is recommended in particular for applications that
involve long exposure times, i.e. for which very little light is used (live
cell imaging, fluorescence images).
The availability of a black reference for the selected camera can be
checked on the menu Tools > Calibration Manager.
- Define Automatically defines the black reference. The measurement lasts for
several seconds. The Black Reference checkbox is then activated au-
tomatically.
Shading Correction Shading correction is used to correct optical effects, such as minor dif-
ferences in illumination or static contaminants in the beam path, with
the help of a reference image. The reference image must be acquired
without a sample. You can select between two modes Global and
Specific from the dropdown list (see description below).
After you have selected the mode simply click on the Define button.
Activated: Applies the defined shading correction to the image. The
applied correction mode is multiplicative.
Deactivated: The measured shading correction is not used. The refer-
ence image is retained.
An empty image without structures at a medium illumination intensity
is required for the shading correction measurement. To create this im-
age, locate an empty position on the slide outside the sample and ac-
quire an image for shading correction there. There must be no visible
structures on the slide as these will be incorporated into the correc-
tion image and could then lead to a visible artifact at other positions.
It may be necessary to clean the slide and defocus the microscope
slightly. You should bear in mind that Köhler illumination needs to be
set correctly. No part of the image must be overexposed.
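The multiplicative correction mode mentioned above can be sketched as follows. This is an illustrative formula only — dividing by the empty-field reference and rescaling by its mean — and not ZEN's actual implementation:

```python
def shading_correct(raw, reference):
    """Multiplicative shading correction: divide each pixel by the
    (empty-field) reference and rescale by the reference mean, so that
    evenly lit regions keep their original intensity level."""
    flat = [p for row in reference for p in row]
    mean_ref = sum(flat) / len(flat)
    return [[pix * mean_ref / ref for pix, ref in zip(rrow, frow)]
            for rrow, frow in zip(raw, reference)]

# A vignetted reference (darker at the edge) brightens the image edge:
reference = [[100.0, 50.0]]
raw = [[100.0, 50.0]]          # same vignetting present in the raw image
corrected = shading_correct(raw, reference)   # -> [[75.0, 75.0]]
```

This also shows why structures on the reference slide are harmful: any feature in the reference image is divided into every subsequent acquisition.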
The Shading Correction checkbox is automatically deactivated when the objective in
question is swung in. Objective recognition on a motorized or encoded
microscope is required for these automatic actions.
Enable Noise Filter Activated: Filters the noise in the acquired image according to the
adjusted threshold. Affects acquired images only. The live image does
not change.
Deals with sporadic bright events in single pixels on CMOS sensors,
like blinking pixels in dark images or spurious cosmic ray events. This
feature is most useful for situations with high gain settings for acquisi-
tion of very dim signals with CMOS or EMCCD cameras. In combina-
tion with a correct black reference an absolutely flat signal back-
ground can be produced.
- Threshold The higher the value, the greater the tolerance for noise. The lower
the value, the stronger the noise reduction.
The noise filter reduces the extent to which individual pixels deviate
from the average value of their nearest neighbors. The Threshold
corresponds to a tolerance value. If the deviation of the middle pixel
value from the average value of the pixels immediately surrounding it
exceeds the tolerance value (i.e. it is interpreted as noise), it is re-
placed by the average value.
This technique reduces the noise of individual pixels that are pro-
duced, in particular with EMCCD cameras and CMOS cameras. The se-
lected technique prevents any changes being made to object edges,
as in most cases these are larger than individual pixels.
This filter is also suitable for removing individual "hot pixels" from an
image without having to acquire a reference image in advance.
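The threshold behavior described above — replacing a pixel by the average of its nearest neighbors only when it deviates by more than the tolerance — can be sketched like this. The exact kernel and tolerance handling are assumptions based on the description, not ZEN's actual code:

```python
def noise_filter(image, threshold):
    """Replace a pixel by the mean of its 8 neighbors if it deviates
    from that mean by more than `threshold` (interior pixels only)."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neighbors = [image[y + dy][x + dx]
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                         if (dy, dx) != (0, 0)]
            mean = sum(neighbors) / 8.0
            if abs(image[y][x] - mean) > threshold:
                out[y][x] = mean
    return out

frame = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]   # one hot pixel
cleaned = noise_filter(frame, threshold=50)            # hot pixel -> 10.0
```

Because edges typically span more than one pixel, their neighborhood means stay close to the pixel values and they pass through unchanged, which matches the edge-preserving behavior described above.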
Enable Unsharp Enhances contrasts at fine structures and edges. Thus, the resulting
Mask image appears clearer and enriched in detail.
- Color Mode Determines the calculation method, which affects the appearance of
the output image.
§ RGB:
– The Unsharp Mask filter calculates the sharpness for each color
channel individually.
– The color saturation and the color of structures may be changed
and color noise may occur.
§ Luminance:
– The Unsharp Mask filter calculates the sharpness based on the
luminance signal computed from the RGB channels.
– This mode avoids possible color noise or shift in color saturation,
which could be induced by certain image textures.
- Auto Contrast Activated: Enables you to adjust the Contrast Tolerance (0-20).
Auto Contrast only works in RGB color mode.
- Contrast Tolerance Increasing the contrast during unsharp masking is achieved by broadening
the distribution of intensities. This corresponds to a spread of
the image histogram.
Controls how much the intensity distribution is spread and thus how
strongly the contrast is increased.
§ Contrast Tolerance = 0 : No spread of intensities, no increase of
contrast
§ Contrast Tolerance = 20: Maximum spread of intensities, maxi-
mum increase of contrast
- Clip To Valid Activated: Composes the processed image of the same colors as the
Bits original image (i.e. the value range of the output image is adjusted to
the color range of the input image).
Deactivated: Colors not present in the original image may appear in
the processed image.
See also
2 Shading Correction [} 87]
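The Unsharp Mask principle described above — adding a scaled difference between the image and a blurred copy to enhance edges — can be sketched in one dimension. This is a generic illustration; ZEN's color modes and contrast tolerance are not modeled here:

```python
def unsharp_mask_1d(signal, amount=1.0):
    """1-D unsharp mask: blur with a [1, 2, 1]/4 kernel, then add
    amount * (signal - blurred) back onto the signal."""
    n = len(signal)
    blurred = signal[:]
    for i in range(1, n - 1):
        blurred[i] = (signal[i - 1] + 2 * signal[i] + signal[i + 1]) / 4.0
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

edge = [0, 0, 0, 100, 100, 100]
sharpened = unsharp_mask_1d(edge)   # overshoot/undershoot appears at the edge
```

The characteristic over- and undershoot on either side of the step is what makes edges appear crisper; the Luminance color mode applies the same operation to a single luminance signal instead of each RGB channel.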
In this section you can adjust how the software retrieves the camera sensor data.
Parameter Description
Color Mode This parameter is only available for color cameras and polarization-
sensitive cameras.
– RGB Transmits the image data of a color camera unchanged. This corre-
sponds to the standard operating mode of a color camera.
For Axiocam 705 pol, captures four-channel images with one
pseudo-color image for angle and degree of polarization and bright-
ness information. For details, see Color Mode for Axiocam 705 pol
[} 908].
– B/W Treats the image data of the color channels as grayscale. The data of
related color channels are averaged. The saturation of the camera ap-
pears reduced as a result.
This process does not change the spectral properties of a color cam-
era. The image information of the color sensor still undergoes color
interpolation. An infrared filter also restricts the spectral sensitivity of
the color camera compared to the spectral sensitivity of a genuine
black and white camera.
For Axiocam 705 pol, captures multi-channel monochrome images
with the basic polarization directions of the sensor polarization filter
directions. For details, see Color Mode for Axiocam 705 pol [} 908].
Live Speed Here you can select the live image update speed.
For CCD-based camera models: slow, medium, fast.
For CMOS-based camera models: high res, fast, low light.
Enables you to focus or to find regions of interest on a sample
quickly. A high live image update speed reduces the exposure time of
the live image, even at longer exposure times used for image acquisi-
tion.
To achieve a similar impression of image brightness, however, the im-
age data supplied must be adjusted digitally, which may generate a
certain amount of noise or reduce the resolution of the live image.
– High Represents the image without artifacts and with a higher image qual-
ity. This mode is only effective with binning factor 1.
Bit Depth Enables the reduction of the delivered camera bit depth by translating
the 14-bit camera data values into smaller values of 12 bit
or 8 bit. This has a visible effect on the number range in the image histogram,
the dynamic range and the corresponding image file size in
an uncompressed CZI image.
By default, the camera models deliver 14 bit per pixel (Axiocam
503/506/512/702) or 12 bit per pixel in the case of the Axiocam 305. 14-bit
and 12-bit per-pixel values need to be stored in two bytes in a CZI
image file. In the case of a translation to 8 bit per pixel, only one byte is
needed. Therefore it is possible to reduce the produced image file size
by a factor of two. This is especially beneficial if large amounts of image
data are acquired.
By reducing the native camera bit depth to smaller values, the available
number range for the digital image signal is decreased. Therefore,
the number of resolvable intensities in one image scene
is decreased accordingly.
– LUT min/LUT max LUT stands for Look-Up Table. It describes the translation method for
(only available quickly translating digital numbers into a different range. If the full
for 8 bit swing of the input signal is not used, the reduction of bit depth can
mode) be adapted accordingly via the translation start point (LUT min) and
the translation end point (LUT max).
If the used intensity range equals just an 8-bit value range, no information
is lost and the unused bits can be excluded from being stored
by a suitably adjusted translation table.
The available range is 0 to 1. The value 0 equals 0% of the input range;
the value 1 equals 100% of the input range.
– Gamma The 14-bit to 8-bit translation is linear by default, which equals a
(only available Gamma value of 1. By assigning values larger or smaller than 1, the
for 8 bit translation becomes non-linear.
mode)
Values <1 selectively reduce dim signal intensities.
Values >1 selectively amplify dim signal intensities.
Reset button
Resets all entries to the original values.
See also
2 AxioCam ICc5 [} 913]
2 AxioCam MR [} 912]
2 AxioCam ERc5s [} 914]
2 AxioCam HR [} 911]
2 Axiocam 503/506/512 [} 904]
2 Axiocam 105 [} 913]
2 Axiocam 305/702/705/712 [} 906]
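The 14-bit to 8-bit translation with LUT min/max described above can be sketched as follows. The mapping formula is an assumption based on the description (clip to the LUT window, normalize, apply a power-law gamma); the exact mapping ZEN applies may differ:

```python
def translate_to_8bit(value14, lut_min=0.0, lut_max=1.0, gamma=1.0):
    """Map a 14-bit pixel value (0..16383) to 8 bit (0..255).

    lut_min/lut_max (0..1, fractions of the 14-bit input range) restrict
    the used input swing; a gamma value != 1 makes the translation
    non-linear. Illustrative formula only, not ZEN's implementation.
    """
    full = 16383.0
    lo, hi = lut_min * full, lut_max * full
    x = min(max((value14 - lo) / (hi - lo), 0.0), 1.0)
    return round(255 * x ** gamma)

# Linear full-range mapping: mid-scale input lands mid-scale
mid = translate_to_8bit(8192)                  # -> 128
# Restricting the LUT to the lower quarter of the input range spreads
# a dim signal across the whole 8-bit output range:
dim = translate_to_8bit(4096, lut_max=0.25)    # -> 255
```

This shows why no information is lost when the used intensities fit within the LUT window: the unused upper bits are simply excluded from the stored range.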
Parameter Description
Center Displays the options for center mode.
– Center Centers the stage at the current position. Otherwise, the stage will do
the scan around its last position.
– Auto-Center Activated: Automatically centers the system around the current posi-
tion of the stage.
– Set First Sets the current frame as the first image of the stack.
– Set Last Sets the current frame as the last image of the stack.
Interval Defines the step size. The default interval is 0.2 μm for Nyquist-sampled
imaging. This should only be changed to coarser values if the experiment,
data rate or speed requirements allow for it.
Light Sheet Selects the size of the light sheet from a list of default light sheet sizes
(e.g. 15 x 550, i.e. 15 µm long and 550 nm thick).
Size Adjusts the region of interest (ROI) (in μm or px) by entering the de-
sired value in the two input fields.
– Crop Area Displays a rectangle in the image to define the cropped area.
– Optimal The light sheet size (ROI) is set to an optimal value corresponding to
the optical magnification (objective), the zoom factor and the wave-
lengths included in the experiment. This provides an image where no
spatial information is lost and no empty data is generated as optimal
sampling is achieved.
Parameter Description
Number of Phases Selects the number of phase images for Lattice SIM (13 or 9) or SIM
Apotome (5 or 3).
Parameter Description
Live Processing Triggers on-the-fly SMLM processing with default parameters.
Parameter Description
Available Models Displays a list of available models.
– Create Analy- Opens a file browser to create an image analysis setting from the cur-
sis Setting rently selected model.
Open AI Model Opens the model store dialog which allows you to manage all your
Store available AI models.
See also
With this dialog you can manage all AI models that are available to you, including downloading
new models from arivis Cloud.
Parameter Description
Available in ZEN Displays all AI models that are currently available in ZEN.
Download from Updates the model list to display all models that are available for
arivis Cloud download to you on arivis Cloud.
Model List Displays a list of available models as a table. Depending on the selection
on the left, either all models available locally in ZEN or all models available
on arivis Cloud are displayed. The table can be sorted by the
different columns by clicking on the respective header. An arrow
indicates whether the entries are sorted in ascending or descending
order.
Properties Displays further information about the currently selected model, e.g.
the number of channels or the pixel type.
– Download Only visible if you have selected Download from arivis Cloud.
Models Starts the download of the currently selected model.
See also
2 Downloading AI Models [} 71]
2 Importing AI Models [} 72]
2 Creating an Image Analysis Setting From an AI Model [} 429]
Parameter Description
Enable ApoTome Activated: Uses the ApoTome for acquisition and experiments.
Phase Images Select the number of phase images per optical section here. The
default value is 5 phase images.
Live Mode Here you set the display of the Live Mode. The default value is Grid
Visible.
Info
The Auto Save tool is not visible if you have activated the Panorama checkbox in the experi-
ment manager. When you execute an experiment or click Snap, the Auto Save tool is dis-
abled until the experiment is cancelled or finished. The tool also stays disabled while the ex-
periment is paused. This behavior prevents operating errors during experiments.
Generally, all images are automatically written to the hard disk during acquisition to prevent data
loss in case of technical problems. The folder path for these files is displayed in the status bar un-
der Storage Folder. In case you want to store the files directly in a location of your choice, acti-
vate the Auto Save checkbox. In this case the files are not written to the temporary folder any-
more, but automatically saved to the path defined in this tool.
Parameter Description
Saving Location
Dropdown
– Store Locally Stores the images on your local PC in the directory you define with
Folder.
– Store in ZEN Data Storage Server Only available if you have set up ZEN Data Storage in ZEN and
defined a default transfer share in the server management (or during
the installation of the server). Both the server and ZEN need to have
access to this folder. Additionally, this option is not available if you
have a ZEN Connect project open.
Directly uploads the images to ZEN Data Storage after the acquisition
is finished and the images are closed in ZEN.
Add to Collection Only visible if you have selected Store in ZEN Data Storage Server.
Selects the collection with which the images are automatically shared
after the upload. If no collection is selected or available, the images
are uploaded to the ZEN Data Storage but not shared with a collec-
tion.
Clicking the button opens the file browser to select a new folder for the
automatically saved images.
Automatic Sub-Folder Only available if you have selected Store Locally and no ZEN Connect
project is open.
Activated: Automatically creates a top-level sub-folder in the given
directory. The sub-folder name is based on the current date, e. g.
2014-07-04.
Format Displays and defines the Format for the naming convention.
Initial Counter Value Defines the initial value for the counter.
Preview Displays a preview of the image naming based on the settings and the
Format.
Add Format IDs Displays a list of all available parameters that can be used to create a
naming convention (displayed in the Format field above). A double
click on an entry adds the parameter to the Format field.
Close CZI after acquisition Activated: Closes the CZI image in the center screen area when the
experiment is finished.
File Name Preview Shows the currently selected storage path as well as a preview of the
file name being used next.
See also
2 Uploading Images Automatically After Acquisition [} 272]
2 General Information Image Saving [} 891]
All images in ZEN are automatically written to the hard disk during acquisition to prevent data loss
in case of technical problems. The folder path for these files is displayed in the status bar under
Storage Folder. The location can easily be opened by double-clicking on this field. The path can
be changed under Tools > Options > Saving.
The automatically saved images are contained in the subfolder temp within the currently chosen
image storage path (default path is: C:\Users\<username>\Pictures\temp). When saving such tem-
porarily saved images via the File > Save, you are asked to specify a document name and a stor-
age location. If you close such an image document without saving, it will be permanently deleted
from the temp folder in order to prevent the accumulation of unnecessary images.
Even though these files are stored physically on the disk, they are displayed with an asterisk and
you are prompted to either rename and store them in a different place, or to delete them. They
are maintained only if the software crashes. In case you want to store the files directly in a loca-
tion of your choice, activate the Auto Save checkbox. In this case the files are not written to the
temporary folder anymore.
If the Auto Save checkbox in the Experiment Manager is activated, all images which are ac-
quired through the Acquisition tab, are automatically stored in the defined folder during acquisi-
tion.
Info
It is not possible to automatically export images created by the Snap function. If you want
to use the Auto Export feature for individual images, you must create a Time Series experiment
with a single cycle.
If the Automated Export checkbox in the Experiment Manager is activated before an experi-
ment is executed, the generated images are stored in the defined directory with the parameters
and options set under the checkbox. This option was developed for automatically exporting
images in a user-defined format (TIFF or JPEG).
Info
When you execute an experiment, the tool is disabled until the experiment is cancelled or fin-
ished. The tool also stays disabled while the experiment is paused. This behavior prevents
operating errors during experiments.
For technical reasons images acquired from the Acquisition tab are always auto-saved temporarily
as CZI files. If the application requires images to be stored in external common file formats, it is
necessary to run the export function. The Automatic Image Export facilitates this in a convenient
and automatic way, giving the choice of single-page TIFF or JPEG file formats. It is also possible to
automatically close and discard the auto-saved CZI file to streamline the acquisition workflow.
Parameter Description
Folder Shows the directory for the images. The text box is read-only; no
values can be entered or pasted by the user. This ensures that the
text box always contains a valid directory.
Automatic Sub-Folder Activated: Automatically creates a top-level sub-folder in the
given directory. The sub-folder name is based on the current date,
e. g. 2014-07-04.
Prefix Here you can define a prefix for the image file name and a name
for the sub-folder. If the text box is empty, an image gets a local-
ized default prefix (“Untitled”) and a folder gets a localized de-
fault name (“New folder”). If an image or folder with the same
name already exists, the new image or folder gets that name with
an increasing index, in accordance with the standard Windows
Explorer behavior, e. g. New Folder (1), New Folder (2).
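The Windows-Explorer-style indexing described above can be sketched in a few lines of Python (an illustration only, not part of ZEN; the function name is made up):

```python
def unique_name(name, existing):
    """Return 'name' if it is unused; otherwise append an increasing
    index in Windows Explorer style, e.g. 'New Folder (1)'."""
    if name not in existing:
        return name
    i = 1
    while "%s (%d)" % (name, i) in existing:
        i += 1
    return "%s (%d)" % (name, i)
```

Called when "New Folder" already exists, the function yields "New Folder (1)", then "New Folder (2)", and so on.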
Format Here you select the format for the export images. Two formats
are supported:
§ TIFF: For the TIFF format (lossless, bigger file size) you can ad-
ditionally select the Compression method.
– None
– LZW
– ZIP
§ JPEG: For the JPEG format (lossy, smaller file size) you can set
the Quality level by adjusting the slider from Low (lower
quality, smaller file size) to High (higher quality, bigger file
size).
Apply display curve Activated: Applies the display curve and channel color to the
and channel color JPEG or TIFF image
Use channel names Activated: The name of the resulting image contains the name
of the defined channel.
Add XML Metadata Activated: Saves an additional XML file with image metadata. Its
name follows the nomenclature:
Prefix_Metadata (image format).xml → Test_Metadata(tif).xml. If
more than one XML file with the same name exists, the file gets an
index, e.g. Test-02_Metadata(tif).xml.
Close CZI image after Activated: Closes the CZI image in the center screen area when
acquisition the experiment is finished.
NOTICE! If Auto Save is not activated, this will lead to the
loss of the original .czi file for the experiment.
Dimension/Sub-directory If you check one of the Channels, Time Series, Z-Stack or Scenes
checkboxes, an additional sub-directory is created if the corresponding
dimension exists in the experiment block.
The sub-directories are created in the same image dimension order as
the CZI image, e.g. T-C-Z. The top-level folder within the dimension
folders is always the B (“Block”) folder.
Each sub-directory gets a letter that represents its image dimension
(T for time series, C for channel, etc.) and an index, if more than one
dimension of the same type exists (T=0, T=1).
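The resulting folder layout can be illustrated with a short Python sketch (a hypothetical helper, not part of ZEN), which puts the B folder on top and appends one letter=index folder per dimension:

```python
import os

def export_subdir(root, block_index, dims):
    """Build the export sub-directory path, e.g. root/B=0/T=1/C=0.
    'dims' is a list of (letter, index) pairs in image dimension order."""
    parts = ["B=%d" % block_index]
    parts += ["%s=%d" % (letter, index) for letter, index in dims]
    return os.path.join(root, *parts)
```

For a T-C experiment, `export_subdir("out", 0, [("T", 1), ("C", 0)])` produces the path out/B=0/T=1/C=0.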
With this tool you can select OAD macros which are executed before and/or after an experi-
ment. The default folder for macros is User/Documents/Carl Zeiss/ZEN/Documents/Macros.
Any other folder can be selected.
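OAD macros are Python scripts. As a minimal sketch of what a pre- or post-experiment macro might do, the following appends a timestamp to a log file; it deliberately avoids any ZEN-specific scripting objects, so everything shown here is an illustration rather than the ZEN API:

```python
import datetime
import os

# Hypothetical log location; a real macro could use any writable path.
DEFAULT_LOG = os.path.join(os.path.expanduser("~"), "experiment_log.txt")

def log_event(stage, path=DEFAULT_LOG):
    """Append a line recording when the hook fired ('before' or 'after')."""
    with open(path, "a") as fh:
        fh.write("%s: %s\n" % (stage, datetime.datetime.now().isoformat()))
```

Selected as the "before" macro, such a script would simply call `log_event("before")` at the top level.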
Parameter Description
Run OAD macro before experiment execution If activated, the selected macro is executed before the experiment
starts.
If you click on the ... button, you can select a macro from the file system.
Run OAD macro after experiment execution If activated, the selected macro is executed after the experiment.
If you click on the ... button, you can select a macro from the file system.
Here you can configure all the settings for the selected camera.
Note that the functions and settings in this tool depend on which camera you are using. Not all
cameras have all of the functions described here.
Parameter Description
Time Specifies the duration of the image acquisition and selects the unit
of time (min, ms, s, µs).
Auto Exposure Activated: The exposure time is calculated automatically every time
an image is acquired. The exposure time in the corresponding input
field fluctuates accordingly.
Deactivated: The exposure time must be set manually.
Set Exposure Starts a one-off measurement of the exposure time, which is then
used for all subsequent images. Deactivates the Auto Exposure
checkbox.
If you are not satisfied with the result, you can adjust the measured
exposure time manually.
Spot Meter/Focus ROI Activated: The exposure time and focus measurements use the
intensity values within a specified area instead of the entire camera sensor
area. This improves the results for the area to be acquired.
If the red Spot Meter/Focus ROI frame is not visible in the live im-
age, right-click in the live image and select Spot Meter/Focus ROI
from the context menu.
Binning Here you can set the binning.
Binning combines the information of neighboring camera pixels into a
single larger pixel. For example, if the binning is set to 2 × 2, four pix-
els are combined to one.
Increasing the binning means weaker signals can be detected for a
given exposure time.
For CCD cameras, binning increases sensitivity by improving the sig-
nal-to-noise ratio, with resolution being decreased by the same factor.
In the case of CMOS cameras, only the signal intensity is increased,
while the pixel count and resolution are reduced correspondingly.
Binning is not available for Axiocam 705 pol.
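The 2 × 2 combination described above can be mimicked with a short NumPy sketch (the camera performs this on or near the sensor; the code below only illustrates the arithmetic):

```python
import numpy as np

def software_bin(frame, k=2):
    """Sum k x k blocks of pixels into one larger pixel, the software
    analogue of camera binning (image size shrinks by k in x and y)."""
    h, w = frame.shape
    frame = frame[:h - h % k, :w - w % k]  # drop any edge remainder
    return frame.reshape(h // k, k, w // k, k).sum(axis=(1, 3))
```

Binning a 4 × 4 test frame with k=2 yields a 2 × 2 result in which each output pixel is the sum of four input pixels.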
Resolution Displays and selects the camera resolution, e.g. 1024 x 1024 px.
IP Quality Here you can select the color interpolation quality (IP Quality) for the
acquired image. Note that this function does not apply to Live mode.
Fast: color interpolation for optimum speed (shorter computation).
High: color interpolation for optimum quality (fewer artifacts). This
mode is only effective with binning factor 1.
Subsampling Here you can reduce the amount of data acquired to achieve faster
frame rates. By subsampling 2x2, the effective pixel pitch is increased
by sampling only every other pixel, thus reducing the overall data size
of your image.
The CMOS sensors in the Axiocam 705 and 712, with the exception
of the Axiocam 705 pol, offer an on-chip subsampling mode. This
subsampling mode enables a dramatic increase in frame rate,
especially for time series acquisition (at short exposure times), at the
full field of view of the camera.
In addition, the amount of produced image data is decreased. As this
is done by skipping every second pixel in x and y direction, the optical
resolution is decreased accordingly. Especially with high-magnification
objectives at lower NA values, the image resolution is optically limited,
so this function can be used to minimize empty image information.
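The effect of 2 × 2 subsampling, i.e. keeping only every other pixel in x and y, can be illustrated as follows (a software sketch; the Axiocam performs this on-chip):

```python
import numpy as np

def subsample(frame, step=2):
    """Keep every 'step'-th pixel in x and y. The field of view is
    unchanged, while the data volume drops by step**2 (4x for 2x2)."""
    return frame[::step, ::step]
```

The data reduction is the source of the faster frame rates: a quarter of the pixels means a quarter of the readout and transfer load.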
Save suitable white balance settings using the Settings section to ensure color reproducibility of
images acquired in the future.
Parameter Description
Auto Compensates for the color temperature of the light source automati-
cally to yield a neutral hue.
The entire camera sensor area is measured. If there are no pure white
areas on the sample and Auto does not yield the desired results, mea-
sure and compensate for the color temperature of the light source as
follows:
§ Transmitted light: Move the sample such that a clear and transpar-
ent region is illuminated or remove the sample from the micro-
scope. Click the Auto button to perform the auto white balance.
§ Reflected light: Use a neutral surface (e.g. a piece of white paper)
as a sample. Click the Auto button to perform the auto white bal-
ance.
You can now acquire white balanced images of your sample with the
above settings.
Pick... Enables you to select a reference pixel for white balance from the live
image.
The selected pixel should be neutral white.
Show Channels Sets the color balance of each color channel (red/cyan, green/ma-
genta and blue/yellow) individually to make the image appear neutral.
Color Temperature Changes the overall color temperature of the image from cool (blue
cast) to warm (red cast).
The color channels (red/cyan, green/magenta and blue/yellow) are ad-
justed automatically. The Color Temperature setting can work
against the settings applied using Show Channels.
Use Color Temperature for fine tuning in combination with Pick... if
Pick... does not give perfect results.
Reset Resets any color changes and sets the white balance value to 6500 K.
In this section you can define a Region Of Interest (ROI) on the camera sensor which will be
used for acquisition. A smaller ROI can increase the acquisition speed.
The region of interest is indicated by a blue frame in the preview window and can be moved and
resized freely. The preview window always shows the entire camera sensor area which can be ac-
quired.
The Pixel Size shown below the preview window indicates the size in µm to which a pixel corre-
sponds. This depends on the camera sensor properties and on the binning.
Parameter Description
Maximize Selects the entire available image sensor area as the region of interest.
Center Positions the region of interest precisely at the center of the image.
Size Sets the width and height of the region of interest in pixels.
Offset Specifies the position of the top left corner of the Acquisition ROI
(blue frame) with respect to the top left corner of the preview win-
dow.
Refresh Overview Acquires and displays an image in the preview window with the cur-
rent ROI settings. This has no effect on the image in the Center
Screen Area.
Adjusting the gain factor amplifies the signal intensity and brightness of the camera image corre-
spondingly while at the same time reducing the available dynamic range.
For Axiocam 702, 705, and 712 models: The gain factor allows a signal amplification as speci-
fied in the camera data sheet. For these CMOS sensor based camera models, also the sensitivity of
the cameras is amplified and the cameras can detect weaker signals. Note that by amplifying a
signal, the max/min signal intensity which can be acquired will be limited. The consequence is a
reduction of the available dynamic range. Gain 4 (opt) is the best compromise between sensitivity
and available dynamic range. Gain 1x (min) is the minimum signal amplification with the highest
dynamic range. Gain 16x (max) is the highest sensitivity with the smallest dynamic range.
Here you can apply basic image processing functions while acquiring the image. This can be help-
ful if certain image processing steps are necessary for any acquired image and saves image pro-
cessing work later in a job.
Depending on the camera model, different settings are available.
Parameter Description
Black Reference Influences the live image and each image acquired. For the black
reference to work, you first need to acquire a completely dark
reference image. To do this, block the light path to the camera by
setting the eyepiece switch to 100% eyepiece and closing the
reflected light/transmitted light shutter. Define a corresponding
reference image using the Define button.
Activated: Applies the measured black reference to the image.
Deactivated: The measured black reference is not used. The refer-
ence image is retained.
If longer exposure times are used (from exposure times of approx.
>5s, depending on the camera), individual bright pixels may become
visible with CCD or CMOS sensors. With the help of the black refer-
ence these effects are measured and corrected in accordance with the
exposure time employed.
It is recommended that you repeat this measurement at certain inter-
vals. This correction is recommended in particular for applications that
involve long exposure times, i.e. for which very little light is used (live
cell imaging, fluorescence images).
The availability of a black reference for the selected camera can be
checked on the menu Tools > Calibration Manager.
- Define Automatically defines the black reference. The measurement lasts for
several seconds. The Black Reference checkbox is then activated au-
tomatically.
Shading Correction Shading correction is used to correct optical effects, such as minor dif-
ferences in illumination or static contaminants in the beam path, with
the help of a reference image. The reference image must be acquired
without a sample. You can select between two modes Global and
Specific from the dropdown list (see description below).
After you have selected the mode simply click on the Define button.
Activated: Applies the defined shading correction to the image. The
applied correction mode is multiplicative.
Deactivated: The measured shading correction is not used. The refer-
ence image is retained.
An empty image without structures at a medium illumination intensity
is required for the shading correction measurement. To create this im-
age, locate an empty position on the slide outside the sample and ac-
quire an image for shading correction there. There must be no visible
structures on the slide as these will be incorporated into the correc-
tion image and could then lead to a visible artifact at other positions.
It may be necessary to clean the slide and defocus the microscope
slightly. You should bear in mind that Köhler illumination needs to be
set correctly. No part of the image must be overexposed.
Fluorescent filters or other fluorescence specific components are not
considered.
In principle, shading correction is objective specific. A separate refer-
ence image has to be created for each objective. Once calibration has
been completed, the correction image associated with the objective
being used is loaded automatically if shading correction is active. If no
correction image is available for an objective, the Shading Correc-
tion checkbox is automatically deactivated when the objective in
question is swung in. Objective recognition on a motorized or en-
coded microscope is required for these automatic actions.
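The multiplicative correction mode mentioned above can be sketched as follows (an illustration of the principle only, not ZEN's implementation): each pixel is scaled by the ratio of the reference image's mean to its local reference value, so unevenly illuminated but otherwise empty areas come out flat:

```python
import numpy as np

def shading_correct(image, reference):
    """Multiplicative shading correction with an empty reference image."""
    ref = reference.astype(float)
    gain = ref.mean() / np.maximum(ref, 1e-6)  # avoid division by zero
    return image * gain
```

Applying the correction to the reference image itself returns a perfectly flat frame, which is a quick way to sanity-check a reference acquisition.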
Enable Noise Filter Activated: Filters the noise in the acquired image according to the
adjusted threshold. Affects acquired images only. The live image does
not change.
Deals with sporadic bright events in single pixels on CMOS sensors,
like blinking pixels in dark images or spurious cosmic ray events. This
feature is most useful for situations with high gain settings for acquisi-
tion of very dim signals with CMOS or EMCCD cameras. In combina-
tion with a correct black reference an absolutely flat signal back-
ground can be produced.
- Threshold The higher the value, the greater the tolerance for noise. The lower
the value, the stronger the noise reduction.
The noise filter reduces the extent to which individual pixels deviate
from the average value of their nearest neighbors. The Threshold
corresponds to a tolerance value. If the deviation of the middle pixel
value from the average value of the pixels immediately surrounding it
exceeds the tolerance value (i.e. it is interpreted as noise), it is re-
placed by the average value.
This technique reduces the noise of individual pixels that are pro-
duced, in particular with EMCCD cameras and CMOS cameras. The se-
lected technique prevents any changes being made to object edges,
as in most cases these are larger than individual pixels.
This filter is also suitable for removing individual "hot pixels" from an
image without having to acquire a reference image in advance.
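The replacement rule described above, compare each pixel with the average of its immediate neighbors and substitute the average only when the deviation exceeds the tolerance, can be sketched like this (an illustration of the principle, not ZEN's implementation; border pixels are left untouched for simplicity):

```python
import numpy as np

def noise_filter(frame, threshold):
    """Replace a pixel by the mean of its 8 neighbors when it deviates
    from that mean by more than 'threshold' (interior pixels only)."""
    f = frame.astype(float)
    # mean of the 8 neighbors of every interior pixel
    neigh = (f[:-2, :-2] + f[:-2, 1:-1] + f[:-2, 2:] +
             f[1:-1, :-2]               + f[1:-1, 2:] +
             f[2:, :-2]  + f[2:, 1:-1]  + f[2:, 2:]) / 8.0
    out = f.copy()
    center = f[1:-1, 1:-1]
    mask = np.abs(center - neigh) > threshold
    out[1:-1, 1:-1] = np.where(mask, neigh, center)
    return out
```

A single bright "hot pixel" in a dark frame deviates far more than the tolerance and is replaced, while larger structures such as edges survive because their neighbors carry similar values.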
Enable Unsharp Mask Enhances contrast at fine structures and edges. The resulting image
appears clearer and richer in detail.
- Radius Determines the size of detail to be enhanced. A small radius enhances
smaller details. The radius also affects the appearance of enhanced
edges. A large radius leads to a visible halo along enhanced edges.
The larger the radius, the broader the halo.
- Color Mode Determines the calculation method, which affects the appearance of
the output image.
§ RGB:
– The Unsharp Mask filter calculates the sharpness for each color
channel individually.
– The color saturation and the color of structures may be changed
and color noise may occur.
§ Luminance:
– The Unsharp Mask filter calculates the sharpness based on the
luminance signal computed from the RGB channels.
– This mode avoids possible color noise or shift in color saturation,
which could be induced by certain image textures.
- Auto Contrast Activated: Enables you to adjust the Contrast Tolerance (0-20).
Auto Contrast only works in RGB color mode.
- Contrast Tolerance Increasing the contrast during unsharp masking is achieved by
broadening the distribution of intensities. This corresponds to a
spread of the image histogram.
Controls how much the intensity distribution is spread and thus how
strong the contrast is increased.
§ Contrast Tolerance = 0 : No spread of intensities, no increase of
contrast
§ Contrast Tolerance = 20: Maximum spread of intensities, maxi-
mum increase of contrast
- Clip To Valid Bits Activated: Composes the processed image of the same colors as the
original image (i.e. the value range of the output image is adjusted to
the color range of the input image).
Deactivated: Colors not present in the original image may appear in
the processed image.
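The classic unsharp-masking principle behind this filter, subtract a blurred copy to isolate fine detail and add that detail back, can be sketched as follows (a simple box blur stands in for the camera's internal filter; radius and amount are illustrative parameters):

```python
import numpy as np

def unsharp_mask(img, radius=1, amount=1.0):
    """Sharpen by adding back the detail layer img - blur(img)."""
    k = 2 * radius + 1
    pad = np.pad(img.astype(float), radius, mode="edge")
    blur = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            blur += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blur /= k * k  # box blur of size (2*radius+1)^2
    return img + amount * (img - blur)
```

A flat image is left unchanged (its detail layer is zero), while an isolated bright pixel is boosted, which is exactly the behavior the Radius and halo discussion above describes.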
See also
2 Shading Correction [} 87]
In this section you can save all the settings you have made in the camera tool to a settings file
(*.czcs). This is very helpful because you can restore/load your saved settings very quickly when
starting the software again.
Parameter Description
Default Resets all camera settings in the Camera tool to the factory default
settings. These settings can also be selected from the dropdown list
of available camera settings to the right of the Default button. To do
this, select the Original Settings entry.
Reload Undoes the changes you have made to a loaded setting and restores
the original status of the loaded setting.
New Creates a new camera setting. Enter a name for the camera setting in
the input field. To save the camera setting, click on the Disc icon to
the right of the input field.
Rename Renames the current camera setting. Enter another name for the cam-
era setting in the input field. To save the camera setting, click on the
Disc icon to the right of the input field.
Save As Saves the current camera setting under a new name. Enter a new
name in the input field. To save the camera setting, click on the Disc
icon to the right of the input field.
In this section you can adjust how the software retrieves the camera sensor data.
Parameter Description
Color Mode This parameter is only available for color cameras and polarization-
sensitive cameras.
– RGB Transmits the image data of a color camera unchanged. This corre-
sponds to the standard operating mode of a color camera.
For Axiocam 705 pol, captures four-channel images with one
pseudo-color image for angle and degree of polarization and bright-
ness information. For details, see Color Mode for Axiocam 705 pol
[} 908].
– B/W Treats the image data of the color channels as grayscale. The data of
related color channels are averaged. The saturation of the camera ap-
pears reduced as a result.
This process does not change the spectral properties of a color cam-
era. The image information of the color sensor still undergoes color
interpolation. An infrared filter also restricts the spectral sensitivity of
the color camera compared to the spectral sensitivity of a genuine
black and white camera.
For Axiocam 705 pol, captures multi-channel monochrome images
with the basic polarization directions of the sensor polarization filter
directions. For details, see Color Mode for Axiocam 705 pol [} 908].
Live Speed Here you can select the live image update speed.
For CCD based camera models = slow, medium, fast.
For CMOS based camera models = high res, fast, low light.
Enables you to focus or to find regions of interest on a sample
quickly. A high live image update speed reduces the exposure time of
the live image, even at longer exposure times used for image acquisi-
tion.
To achieve a similar impression of image brightness, however, the im-
age data supplied must be adjusted digitally, which may generate a
certain amount of noise or reduce the resolution of the live image.
– High Represents the image without artifacts and with a higher image qual-
ity. This mode is only effective with binning factor 1.
Bit Depth Enables the reduction of the delivered camera bit depth by
translating the 14-bit camera data values into smaller 12-bit or 8-bit
values. This has a visible effect on the number range in the image
histogram, the dynamic range and the corresponding image file size
in an uncompressed CZI image.
By default, the camera models deliver 14 bit per pixel (Axiocam
503/506/512/702) or 12 bit per pixel in the case of the Axiocam 305.
14-bit and 12-bit per pixel values need to be stored in two bytes in a
CZI image file. In the case of a translation to 8 bit per pixel, only one
byte is needed. Therefore, it is possible to reduce the produced image
file size by a factor of two. This is especially beneficial if large
amounts of image data are acquired.
By reducing the native camera bit depth to smaller values, the avail-
able number range for the digital image signal is decreased. There-
fore, the available amount of resolvable intensities in one image scene
is decreased accordingly.
– LUT min/LUT max (only available for 8 bit mode) LUT stands for Look-Up Table. It describes the method for quickly
translating digital numbers into a different range. If the full swing of
the input signal is not used, the reduction of bit depth can be adapted
accordingly via the translation starting point LUT min and the
translation end point LUT max.
If the used intensity range equals just an 8-bit value range, no
information is lost and the unused bits can be excluded from being
stored by a suitably adjusted translation table.
The available range is 0 to 1. The value 0 equals 0% of the input
range; the value 1 equals 100% of the input range.
– Gamma (only available for 8 bit mode) The 14-bit to 8-bit translation is linear by default, which equals a
Gamma value of 1. By assigning values larger or smaller than 1, the
translation becomes non-linear.
Values <1 selectively reduce dim signal intensities.
Values >1 selectively amplify dim signal intensities.
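Taken together, LUT min/LUT max and Gamma define a translation of the form sketched below. The exact formula ZEN uses is not documented here, so this is an assumed convention chosen to match the descriptions above (the exponent 1/gamma makes gamma > 1 amplify dim intensities):

```python
import numpy as np

def to_8bit(raw, lut_min=0.0, lut_max=1.0, gamma=1.0, native_bits=14):
    """Map native camera counts to 8 bit. lut_min/lut_max select the
    used fraction (0..1) of the input range; gamma shapes the curve
    (assumed convention: gamma > 1 amplifies dim signals)."""
    full = float(2 ** native_bits - 1)
    lo, hi = lut_min * full, lut_max * full
    x = np.clip((raw - lo) / (hi - lo), 0.0, 1.0)
    return np.round(255.0 * x ** (1.0 / gamma)).astype(np.uint8)
```

With the default linear settings, 0 maps to 0 and the full 14-bit value maps to 255; raising gamma above 1 lifts mid and dim intensities while leaving the endpoints fixed.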
Info
Activate both checkboxes if you want the trigger signal to be generated both during the live
image and during acquisition.
Parameter Description
Enable for Snap Activated: Generates the trigger signal during the acquisition of an
image.
Enable for Live Activated: Generates the trigger signal during the live image.
Control Signal
- Active High The Control Signal jumps from 0 Volts to 5 Volts when the camera's
exposure begins. Following exposure it returns to 0 Volts.
- Active Low The Control Signal jumps from 5 Volts to 0 Volts when the camera's
exposure begins. Following exposure it returns to 5 Volts.
Shutter Open Delay Here you can enter the delay before acquisition.
Trigger In section
The trigger input allows you to trigger acquisition by the camera using an external trigger signal.
Info
Due to its inertia, a mechanical shutter needs a certain amount of time to change from the
closed to the open position after the control signal has been generated. To ensure that this
transitional state is not recorded during the exposure of the sensor, the start of actual acquisi-
tion can be delayed by an adjustable period of time.
Parameter Description
Enable for Snap Activated: Only acquires the image after the trigger signal has been
received.
Control Signal
- Active High The Control Signal jumps from 0 Volts to 5 Volts when the camera's
exposure begins. Following exposure it returns to 0 Volts.
- Active Low The Control Signal jumps from 5 Volts to 0 Volts when the camera's
exposure begins. Following exposure it returns to 5 Volts.
Reset button
Resets all entries to the original values.
See also
2 AxioCam ICc5 [} 913]
2 AxioCam MR [} 912]
2 AxioCam ERc5s [} 914]
2 AxioCam HR [} 911]
2 Axiocam 503/506/512 [} 904]
2 Axiocam 105 [} 913]
2 Axiocam 305/702/705/712 [} 906]
Parameter Description
Camera Identifier A clear identifier for the currently active camera is displayed here. It
consists of the product name and part of the serial number. This is
helpful if you want to identify the camera in question, e.g. when
working with a dual camera system.
Orientation section
Parameter Description
Orientation Here you can select the orientation of the camera image. This allows
you to adjust the camera image to the properties of the different
camera ports.
– Original
– Flip Horizontally
– Flip Vertically
– Rotate 90 CW
– Rotate 90 CCW
– Rotate 180
– Mirror at +45
Diagonal
– Mirror at -45
Diagonal
Acquire section
Parameter Description
Readout Speed (MHz) The readout speed can be varied between 39 MHz and 13 MHz if
the camera is operated via the USB 3.0 bus. If the camera is connected
to the slower USB 2.0 bus, only the 13 MHz mode is available. In the
slower 13 MHz mode, the signal quality is slightly improved due to
reduced noise in the signal transmission.
Readout Port The Axiocam 503/506/512 uses a high performance CCD sensor
with four readout ports. It can be adjusted to quadport, dualport, sin-
gleport and Auto mode. Maximum speed is reached by using all four
ports and short exposure times. When the exposure time gets longer
than the readout time, the benefit of using multiple ports becomes
insignificant. By switching the readout mode to single port, the most
homogenous signal quality can be reached as all data is sent through
one single processing chain. In Auto mode, the number of used read-
out ports is selected automatically depending on the exposure time.
Readout Time The valid camera readout time (in ms) is defined by the number of
used ports or by defining a sensor sub region window (ROI).
SW Subsampling The software subsampling feature is used to reduce the image size,
which helps to reduce the image data load in the case of long
time-lapse or multi-dimensional acquisitions. As this is done by
software processing, the full image is acquired and prepared before it
is downsized by this function. Especially with high-magnification
objectives at lower NA values, the image resolution is optically limited.
Hence, this function can be used to minimize empty image
information. This feature works with all Axiocam 3 series, 5 series,
and 7 series models.
Temperature section
Parameter Description
Temperature The valid sensor temperature is shown here. It is adjusted to 18 °C
and cannot be changed. If a black reference is used, it should be used
at the same sensor temperature at which it was created.
If free air circulation for the camera housing is blocked, the sensor
temperature may be increased and the dark current of the sensor may
be higher than normal. If the camera is operated without cooling
(USB 2.0 port of camera not connected), the sensor temperature is in-
creased and dark current will be higher than normal. This should be
considered when using the camera with longer exposure times.
Parameter Description
Camera Identifier A unique identifier for the currently active camera is displayed here. It
consists of the product name and part of the serial number. This is
helpful if you want to identify the camera in question, e.g. when
working with a dual camera system.
Orientation section
Parameter Description
Orientation Here you can select the orientation of the camera image. This allows
you to adjust the camera image to the properties of the different
camera ports.
– Original
– Flip Horizontally
– Flip Vertically
– Rotate 90 CW
– Rotate 90 CCW
– Rotate 180
– Mirror at +45 Diagonal
– Mirror at -45 Diagonal
Acquire section
Parameter Description
Cooling Status information indicating whether camera cooling is active. While the Axiocam 305 is only temperature-stabilized, the Axiocam models of the 7 series are temperature-stabilized and can be operated with active cooling. Cooling is deactivated if the USB 2.0 connector of the camera is not connected to the PC or to a USB-compatible power supply.
Readout Time The current camera readout time (in ms) is determined by the number of ports used or by the definition of a sensor subregion (ROI).
Polarization section
Only available for Axiocam 705 pol.
Parameter Description
Live Polarization Only available in B/W mode.
Displays the polarization channel in the live image. The given angles correspond to the polarization filter directions on the sensor.
– 0°
– 45°
– 90°
– 135°
– Simplified Color Coding Ensures the fastest possible representation of the live image by choosing a faster calculation path for the color representation.
Parameter Description
Display HLS images Only available in RGB mode.
Switches the display of the partial images with the angle of polarization (AoP), degree of polarization (DoLP) and intensity information on or off. If the checkbox is deactivated, only the finalized pseudo-color image is provided.
For details, see Color Mode for Axiocam 705 pol [} 908].
Parameter Description
SW Subsampling The software subsampling feature reduces the image size, which helps to reduce the image data load in the case of long time-lapse or multi-dimensional acquisitions. As this is done by software processing, the full image is acquired and prepared before it is downsized by this function. Especially with high-magnification objectives at lower NA values, the image resolution is optically limited; hence, this function can be used to minimize empty image information. This feature works with all Axiocam 3 series, 5 series, and 7 series models.
Temperature section
Parameter Description
Temperature The current sensor temperature is shown here. It is regulated to 18 °C and cannot be changed. If a black reference is used, it should be used at the same sensor temperature at which it was created.
If free air circulation around the camera housing is blocked, the sensor temperature may increase and the dark current of the sensor may be higher than normal. If the camera is operated without cooling (USB 2.0 port of the camera not connected), the sensor temperature increases and the dark current will be higher than normal. This should be considered when using the camera with longer exposure times.
B/W mode
In B/W mode, an image acquisition produces an image with four channels. Each channel corre-
sponds to one of the polarization directions. The channels can be viewed individually or as an
overlay image. It is possible to assign a pseudo-color to the individual polarization channels and to
view them as an overlay image. In doing so, it is helpful to adjust the characteristic curves for dis-
playing the images channel by channel.
The channels are displayed in the Dimensions tab and can be viewed individually via the Single
Channel checkbox.
RGB mode
In RGB mode, the polarization information of the camera is interpreted as a color image. In addi-
tion, the underlying information is available as further channels with the angle, the degree, and
the brightness of the image. This conversion of the polarization channels into these derived pa-
rameters is based on the theory of Stokes parameters which can be used to describe polarization.
These can be derived directly from the directions of the polarization filters from the sensor.
§ S0 = I(0) + I(90)
§ S1 = I(0) - I(90)
§ S2 = I(45) - I(135)
The following values are calculated by using the aforementioned parameters:
This value is independent of intensity because it represents the angle of polarization at an im-
age location from 0 to 180 degrees. This channel can appear noisy when the camera is at low
saturation. In the pseudo-color image, this information is assigned to the color tone (hue). If
the light intensity is low, more noise is visible in the image. The brightness only changes de-
pending on the distribution of the polarization angle of the light.
Note: Circular polarization cannot be detected with this camera.
The angle of polarization is assigned to a color value, with the maximum AoP value being mapped to the numerical value 16383 (2¹⁴ − 1).
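As an illustration, the Stokes relations above can be sketched in a few lines of Python. The `stokes` function follows the formulas given in this section; the angle-of-polarization and degree-of-linear-polarization expressions are the standard polarimetry formulas, and all function names here are illustrative, not part of the ZEN software.

```python
import math

def stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from the four polarization filter intensities."""
    s0 = i0 + i90      # S0 = I(0) + I(90): total intensity
    s1 = i0 - i90      # S1 = I(0) - I(90)
    s2 = i45 - i135    # S2 = I(45) - I(135)
    return s0, s1, s2

def aop_degrees(s1, s2):
    """Angle of polarization, mapped into the 0..180 degree range."""
    return (0.5 * math.degrees(math.atan2(s2, s1))) % 180.0

def dolp(s0, s1, s2):
    """Degree of linear polarization, 0 (unpolarized) to 1 (fully polarized)."""
    return math.hypot(s1, s2) / s0 if s0 else 0.0

def aop_to_code(aop_deg):
    """Map the angle of polarization onto the 14-bit range 0..16383."""
    return round(aop_deg / 180.0 * 16383)
```

For light fully polarized at 45°, I(0) = I(90) = 0.5, I(45) = 1 and I(135) = 0, giving S1 = 0 and S2 = 1, i.e. an AoP of 45° and a DoLP of 1.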
The live image is visualized via pseudo-color encoding of the polarization channels. The following encoding formula is used:
§ R = I(0)
– I(0) = intensity of polarization direction 0°
§ G = (I(45) + I(90))/2
– I(45) = intensity of polarization direction 45°
– I(90) = intensity of polarization direction 90°
§ B = I(135)
– I(135) = intensity of polarization direction 135°
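As a minimal per-pixel sketch of the encoding formula above (illustrative Python, not ZEN code):

```python
def pseudo_color(i0, i45, i90, i135):
    """RGB triple from the four polarization intensities,
    following the encoding formula given above."""
    r = i0                  # R = I(0)
    g = (i45 + i90) / 2.0   # G = (I(45) + I(90)) / 2
    b = i135                # B = I(135)
    return r, g, b
```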
9.3.7.9.3 AxioCam HR
Parameter Description
Camera Identifier A unique identifier for the currently active camera is displayed here. It
consists of the product name and part of the serial number. This is
helpful if you want to identify the camera in question, e.g. when
working with a dual camera system.
Orientation section
Parameter Description
Orientation Here you can select the orientation of the camera image. This allows
you to adjust the camera image to the properties of the different
camera ports.
– Original
– Flip Horizontally
– Flip Vertically
– Rotate 90 CW
– Rotate 90 CCW
– Rotate 180
– Mirror at +45 Diagonal
– Mirror at -45 Diagonal
Acquire section
Parameter Description
Readout Speed (MHz) Here you can select the readout speed of the camera. The available modes differ in speed and digitization accuracy:
– High Speed The readout speed is 24 MHz and the digitization accuracy is
set to 12 bits per pixel. This mode offers advantages if sufficient light
is available and situations need to be acquired quickly, for example
fast time series or tile images.
– High Accuracy The readout speed is 12 MHz and the digitization accuracy is set to 14
bits per pixel. This mode offers advantages if very little light is avail-
able and you want the camera to acquire very weak signals just above
the camera's noise level.
9.3.7.9.4 AxioCam MR
Parameter Description
Camera Identifier A unique identifier for the currently active camera is displayed here. It
consists of the product name and part of the serial number. This is
helpful if you want to identify the camera in question, e.g. when
working with a dual camera system.
Orientation section
Parameter Description
Orientation Here you can select the orientation of the camera image. This allows
you to adjust the camera image to the properties of the different
camera ports.
– Original
– Flip Horizontally
– Flip Vertically
– Rotate 90 CW
– Rotate 90 CCW
– Rotate 180
– Mirror at +45 Diagonal
– Mirror at -45 Diagonal
Parameter Description
Camera Identifier A unique identifier for the currently active camera is displayed here. It
consists of the product name and part of the serial number. This is
helpful if you want to identify the camera in question, e.g. when
working with a dual camera system.
Orientation section
Parameter Description
Orientation Here you can select the orientation of the camera image. This allows
you to adjust the camera image to the properties of the different
camera ports.
– Original
– Flip Horizontally
– Flip Vertically
– Rotate 90 CW
– Rotate 90 CCW
– Rotate 180
– Mirror at +45 Diagonal
– Mirror at -45 Diagonal
Parameter Description
Camera Identifier A unique identifier for the currently active camera is displayed here. It
consists of the product name and part of the serial number. This is
helpful if you want to identify the camera in question, e.g. when
working with a dual camera system.
Orientation section
Parameter Description
Orientation Here you can select the orientation of the camera image. This allows
you to adjust the camera image to the properties of the different
camera ports.
– Original
– Flip Horizontally
– Flip Vertically
– Rotate 180
Acquire section
Parameter Description
Gain Boost If activated, the image signal is amplified so that the image becomes
brighter. The gain factor is 1.7x. This factor is in addition to the stan-
dard gain control.
Parameter Description
Camera Identifier A unique identifier for the currently active camera is displayed here. It
consists of the product name and part of the serial number. This is
helpful if you want to identify the camera in question, e.g. when
working with a dual camera system.
Sharpness section
Parameter Description
Sharpness Increases the impression of sharpness in an image.
Orientation section
Parameter Description
Orientation Here you can select the orientation of the camera image. This allows
you to adjust the camera image to the properties of the different
camera ports.
– Original
– Flip Horizontally
– Flip Vertically
– Rotate 180
In the Channels tool you can configure channels for Widefield acquisition. The tool offers you the
option of entering the hardware settings for acquisition manually or performing the configuration
automatically. It is not recommended for LSM channels, as you still need to open Imaging Setup or use Smart Setup to choose specific channels and adapt the spectral range for detection.
If there is no channel available, you can add a channel to the experiment by clicking on the corre-
sponding button (e.g. Airyscan, LSM (various modes), or Widefield).
1 Channels list
Displays the selected channels of your experiment, see Channels List [} 916].
3 Channel settings
Displays settings for your channels, see Channel Settings (Widefield) [} 920]/Channel
Settings (LSM) [} 922].
See also
2 Channels Tool (Axioscan Specific) [} 1151]
Info
If you select a dye or contrast technique in the Add Dye or Contrast Method dialog, a sug-
gestion for the hardware settings for the acquisition of this channel is made automatically. If
no suggestion can be made, a channel without hardware settings is added. You then see a
corresponding indication in the status area of the program interface.
Parameter Description
Down Moves the selected channel one row down.
Delete Deletes the selected channel.
Focus Ref. Sets the selected channel as reference channel for focus actions or
stitching during acquisition.
– Add New... If Widefield channels are added, this function opens the dialog to
add more widefield tracks, see Add Dye or Contrast Method Dialog
[} 869].
If LSM or Airyscan channels are added, this function will add a new
LSM or Airyscan track.
– Duplicate Creates a new track with the same settings and dye as the currently
selected track.
– Rename Assigns a new name to the channel of the currently selected track. To
change the name of the track, you can directly click into the respec-
tive field of the channels list.
– Reset Color Resets the color of the selected track(s) to default.
– Set as Reference Channel Defines the selected channel as the reference channel for focusing actions.
Note that you can also set channels of inactive tracks as reference channels. The autofocus will in this case be performed on this channel, but the channel or track will not be part of the resulting image document. Using this approach, e.g. camera tracks may be used for fast focusing within a confocal experiment, even though camera and confocal channels cannot be acquired into one image.
Note: Do not use a Lambda or Online Fingerprinting track as reference track for SW Autofocus, as the focus search will be extremely slow.
– Compare... Opens the Compare channels dialog, where the active channels are displayed horizontally so you can easily compare and adjust key parameters of the active tracks.
Parameter Description
High Intensity Laser Range Activated: Uses a high-intensity laser range, where you can adjust the lasers between 0.2 and 100% of their power. This is especially relevant for bleaching experiments.
The setting affects all tracks. While switching, the system tries to keep the intensities at a similar level. If the currently selected intensity is outside the overlapping range (0.2% to 3.5-5%), the closest possible value is used.
Note: This section is only available with LSM 900.
By default, the LSM works in a laser power range between 0.01% and 3.5-5% of the available laser intensity. The available minimum and maximum of the default range depend on the laser.
Lasers Select the laser lines needed for sample illumination for the current
track.
Activate the required lasers by activating the corresponding checkbox.
The laser lines along with sliders will appear. Set the required attenua-
tion (%) using the sliders, the arrows, or typing a number into the in-
put field. In case the laser is not yet ready for operation (see also
Lasers Tool [} 1300]), the checkbox and text are displayed in red. For
tunable multiphoton lasers (not available for Celldiscoverer), an edit-
ing field is displayed. Write the desired wavelength into the editing
box to tune the laser to this wavelength. For multiple tracks the actual
tuning is immediately done for the first active track. For subsequent
tracks the tuning happens whenever the previous tracks are deacti-
vated or deleted or during the actual experiment. In case a tunable
and a fixed line are available, the tuning range has a gap which is de-
fined by the filter combining the tunable and fixed line to get both
beams onto the same optical path.
Relative Laser Power (in Confocal Mode) Displays the relative percentage of the laser light that is applied to the sample, see Display of Relative Laser Power for Different Magnifications [} 919].
Parameter Description
The Airy Unit size and, derived from this, the section thickness of the
optical slice is determined by the emission detection range. Diameter
Pinhole = 1.22 x (detection wavelength/numerical aperture). For the
calculation of the detection wavelength the center of the emission
range is taken. Laser lines set within the detection range reduce the
detection range (lower border determined by the laser line) and shift
the center point accordingly. Without any defined detection range
(i.e. no emission filter) the system takes the center of the potential de-
tection range defined by the hardware parameters of the detector.
Within one track the lowest detection range is taken for the calcula-
tion.
When increasing the pinhole diameter, an information mark might appear. It signals that, at the given optical parameters, the resolution in Z is no longer optimal. The tooltip shown with the information mark indicates the best step size for Z-stacks to avoid losing resolution. For LSM confocal tracks using NLO lasers (not supported for Celldiscoverer), the section thickness for excitation is calculated based on a formula that leaves out the pinhole settings; hence, no Airy Unit value is displayed. These tracks should always be set up with a completely open pinhole.
The control is not available for Airyscan and NDD tracks. For Airyscan
tracks the physical pinhole is automatically opened and set to the op-
timal diameter. NDD tracks do not have a pinhole.
- Max Opens the pinhole to its maximum diameter. This can be useful to
find the focal plane and is the recommended setting when using a
multiphoton laser for excitation.
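The 1 Airy Unit calculation described above can be illustrated as follows. This is a simplified sketch under stated assumptions: the helper names are hypothetical, and system-specific factors such as the total magnification and detector hardware limits are omitted.

```python
def detection_center_nm(lower_nm, upper_nm, laser_lines_nm=()):
    """Center of the emission detection range. A laser line lying inside
    the range raises the lower border and shifts the center accordingly,
    as described in the text above."""
    for line in sorted(laser_lines_nm):
        if lower_nm < line < upper_nm:
            lower_nm = line
    return (lower_nm + upper_nm) / 2.0

def airy_unit_diameter(wavelength_nm, numerical_aperture):
    """1 AU pinhole diameter: 1.22 * detection wavelength / NA
    (same length unit as the wavelength)."""
    return 1.22 * wavelength_nm / numerical_aperture
```

For a 500-550 nm detection band and an NA 1.4 objective, the center wavelength is 525 nm and 1 AU corresponds to 1.22 × 525 / 1.4 = 457.5; a 510 nm laser line inside the band would raise the lower border and shift the center to 530 nm.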
When using certain objective/magnification changer combinations (e.g. the Celldiscoverer 7 containing 4 objectives and 3 magnification changers), only a portion of the laser light is applied to
the sample due to the optical design of the system, see the table for applicable laser power be-
low. This information is displayed as Relative Laser Power (for LSM tracks) and Relative Laser
Power in Confocal Mode (for Airyscan MPLX tracks) in the Channels tool. The value is based on
the currently used hardware settings defined in the Right Tool Area and does not reflect the be-
fore/after settings in experiments and experiment blocks.
Example: At 5x magnification the laser power was set to 1%. After changing to 20x/0.95 magni-
fication the laser power can be increased with the slider to set the relative laser power again to
1%.
Calculation
The laser power is calculated in the following way:
§ For confocal tracks: Relative laser power (%) = applicable laser power depending on objective/
magnification changer * adjusted laser power
§ For Airyscan MPLX tracks: Relative laser power in confocal mode (%) = relative laser power *
66%
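The two formulas above can be expressed directly. This is a sketch assuming the applicable laser power of the objective/magnification changer combination is given as a fraction; the function names are illustrative, not part of ZEN.

```python
def relative_laser_power(applicable_fraction, adjusted_percent):
    """Confocal tracks: relative laser power (%) = applicable laser power
    of the objective/magnification changer combination (as a fraction)
    times the adjusted laser power (%)."""
    return applicable_fraction * adjusted_percent

def relative_laser_power_confocal_mode(applicable_fraction, adjusted_percent):
    """Airyscan MPLX tracks: the confocal value scaled by 66 %."""
    return 0.66 * relative_laser_power(applicable_fraction, adjusted_percent)
```

If only 50% of the laser light reaches the sample and the slider is set to 2%, the relative laser power is 1%, and 0.66% in confocal mode for an Airyscan MPLX track.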
The settings always relate to the channel you have selected in the Channels list. To show the set-
tings for all channels, click and select Select All in the Channels list.
Parameter Description
Dye Name In the input field after the selected dye you can enter an additional
name.
Camera Select the desired camera for the channel from the dropdown list.
Auto Exposure Activated: Automatically determines the camera's exposure time for
the selected channel. The value set manually is ignored.
Set Exposure Starts an exposure time measurement for the channel. After the mea-
surement the value is adopted as the exposure time setting.
Time Adjust the exposure time for the camera using the slider or spin box/
input field. Select the unit of time from the dropdown list at the right
of the spin box/input field.
Parameter Description
Shading Correction Activated: Uses the calculated shading correction for this channel. To
learn more about shading correction, see also Post Processing Sec-
tion [} 897].
- Define Automatically calculates the shading correction for the selected chan-
nel.
For ZEN Celldiscoverer, the shading reference is created and automati-
cally saved in the default collection of the shading management.
EM Gain Only visible for EMCCD camera models. Sets the EM gain value.
Z-Stack Mode Only visible if the Z-Stack checkbox is activated in the Experiment
Manager and the Show All mode is active.
- Single slice Acquires a single slice of the Z-stack only. Select the single slice in the
only input box under the list. If the Use center checkbox is activated, the
center focus plane is used for acquisition.
- Single slice, Acquires an image of a single slice of the Z-stack only. All other Z-
rest black slices of the stack are filled with black images. Select the single slice in
the input box under the list. If the Use center checkbox is activated,
the center focus plane is used for acquisition.
- Fill with single Acquires an image of a single slice of the Z-stack only and fills all
slice other Z-slices with this slice. Select the single slice in the input box un-
der the list. If the Use center checkbox is activated, the center focus
plane is used for acquisition.
Live Denoising Only visible if you have a denoising model available in ZEN.
Activated: Uses a denoising model during a Continuous acquisition.
The model can be selected from the dropdown, see also Using De-
noising During Continuous Acquisition [} 62].
The settings always relate to the channel you have selected in the Channels list. To show the set-
tings for all channels, in the Channels tool, click and select Select All.
If you are using FCS for LSM 980, see Channels Tool - Measurement Settings [} 1243] for specific
information.
Parameter Description
Master Gain Slider and editing box to control the voltage of the PMTs. Increasing
the gain of the PMT corresponds to a higher voltage of the detector.
The image becomes brighter and you may be able to reduce the laser
power. At higher voltage, the noise level in the image increases as the
dark noise of the detector becomes visible in the images predomi-
nantly as single bright pixels.
The optimum between gain and noise depends on your experimental
requirements and on your sample. The maximum available voltage de-
pends on the type of the detector and is 1200V for multialkali PMTs,
900V for GaAsP PMTs and 1000V for Airyscan. GaAsP PMTs and
Airyscan have a minimum voltage of 500V.
Mode For specific GaAsP channels and one cooled Multialkaline PMT of the
LSM 980, the detectors can be run in photon counting mode. This op-
tional mode is activated by clicking the button Photon Counting.
In photon counting mode no Digital Offset and Digital Gain slider is
available. Master Gain is set to a calibrated value to have optimal set-
tings for photon counting. Offset must not be changed for this mode
either.
In photon counting mode an artificial maximum of detectable pho-
tons is set. For all GaAsP detectors and the side PMT (Ch2) this value
is defined by CRmax/digital gain (CRmax = maximal count rate) de-
fined by the software, which is 4 MHz for the BiG.2 and Ch2 and 2
MHz for Chs (32 channel GaAsP array). Higher count rates are re-
garded as saturation.
Note that too much light on the detectors (high count rates for some milliseconds) leads to a detector shutdown to avoid damage.
Note that images acquired in photon counting mode are displayed in
their original count rate per pixel and hence will appear fairly dark
compared to intensity mode images. Adjust the display to improve the
signal visibility.
Z-Stack Mode Only visible if the Z-Stack checkbox is activated in the Experiment
Manager and the Show All mode is active.
- Single slice Acquires a single slice of the Z-stack only. Select the single slice in the
only input box under the list. If the Use center checkbox is activated, the
center focus plane is used for acquisition.
- Single slice, Acquires an image of a single slice of the Z-stack only. All other Z-
rest black slices of the stack are filled with black images. Select the single slice in
the input box under the list. If the Use center checkbox is activated,
the center focus plane is used for acquisition.
- Fill with single Acquires an image of a single slice of the Z-stack only and fills all
slice other Z-slices with this slice. Select the single slice in the input box un-
der the list. If the Use center checkbox is activated, the center focus
plane is used for acquisition.
In this tool you can see the temperature status of the Linkam Cryo stage. When the stage is se-
lected in the MTB configuration and properly connected to the computer, the temperature is
logged beginning with the ZEN startup. This logging of data is asynchronous and runs as long as
ZEN is active.
Note that if the stage is physically disconnected from the computer, only the last received temperature is logged. Only after a restart of ZEN is the current temperature logged again in a new log file.
Parameter Description
Temperature Status Displays the current temperatures.
Parameter Description
Temperature Logging
– Open record Opens the record for the logged temperatures. Current opens the
current log file in ZEN as a .czt file.
Folder opens the folder where the log file is saved on your computer
(C:\Users\user\Documents\Carl Zeiss\ZEN\Documents\TemperatureLog-
ging).
– Logging/Start Logging Shows that the temperature is being logged. Click on Logging to stop logging the temperature. The button changes to Start Logging. Click on Start Logging to continue logging the temperature.
Image Temperature Data
Parameter Description
Status Displays the current status of the device, e.g. Standby or Monitoring.
Find Surface Tries to find the surface of the cover glass and adjusts the focus posi-
tion accordingly.
If the signal is not strong enough, the current focus position is retained.
Store Focus Sets (and saves) the current focus position as the stabilizing position.
Note that if you change an objective, the saved position will be
deleted.
Lock Focus If activated, Definite Focus holds the current focus distance by starting a continuous focus stabilization. After activation, Lock Focus for Definite Focus 3 stays active during Live and Continuous (also after manual refocusing), Snap (stops during acquisition and is switched on again), 2x2/3x3 Snap, as well as during experiments without focus strategy, Z-Stack, Fast Acquisition (triggered), Mixed Mode (for Celldiscoverer 7), bleaching, and channel offset. Lock Focus is switched off when ZEN shuts down, but can be switched on via the TFT display, if necessary.
The checkbox to activate the tool is only visible if Time Series acquisition is active and Tiles is not
activated.
Using the tool you can open the Mean ROI Setup where you can configure physiology functions
for a Time Series experiment. Deactivate the Dynamics checkbox to deactivate physiology func-
tions.
If you click on the Mean ROI Setup button the setup view will be visible in the Center Screen
Area. There you will see a snapshot containing all channels of the currently active image.
The Mean ROI Setup is in essence a modified version of the MeanROI view with several minor additions. It allows experiment pre-settings to be made on snapshots of the cells/specimen on which the measurements will be made. These settings include:
See also
2 Mean ROI Tab [} 568]
2 Layout Tab [} 569]
2 Charts Tab [} 570]
2 Online Ratio Tab [} 572]
2 Mean ROI View [} 566]
Here you can find an overview of your experiment parameters, e.g. the memory requirement of
the experiment or its duration.
Parameter Description
Required Disk Space Indicates the calculated memory space that the experiment
will take up on your hard drive. All the activated blocks of
an experiment created using the Experiment Designer are
taken into account.
Duration (Theoretical) The system adds together all the exposure times arising
during acquisition in the experiment and indicates this
value. In the case of time series the intervals set are also
taken into account. The actual acquisition duration will al-
ways turn out longer, however, as switching times for com-
ponents (diaphragms, reflectors) and positioning times (Z-
plane, stage position) also come into play.
Maximum Acquisition Rate If the Time Series acquisition dimension is activated in the
Experiment Manager, you can measure the maximum
possible frame rate of the system in the Time Series tool.
In that case the frame rate is shown here. Otherwise "not available" is displayed. After any change is made to the experiment, the frame rate must be determined again in the Time Series tool.
Elapsed Time (Last Experiment) If you have already run the current experiment before on the system, the duration actually required for it is displayed
here. This information disappears again if you change the
experiment.
Tile Size Shows the X/Y dimensions of your experiment. In the case
of a single position this value is identical to the size of the
camera field.
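The theoretical duration described in the table can be approximated as follows. This is a simplified sketch: ZEN's actual calculation covers all experiment blocks, and the real duration is always longer because switching and positioning times are deliberately not included.

```python
def theoretical_duration_ms(exposures_ms, cycles=1, interval_ms=0.0):
    """Lower bound on experiment duration: the sum of all channel
    exposure times per time point, repeated over the time series,
    plus the configured interval between time points."""
    per_cycle = sum(exposures_ms)
    return cycles * per_cycle + max(cycles - 1, 0) * interval_ms
```

Two channels at 100 ms and 50 ms, acquired at 3 time points with a 1 s interval, give 3 × 150 + 2 × 1000 = 2450 ms as a lower bound.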
This tool allows you to define Regions of Interest (ROIs), which are used for image acquisition, sample manipulation (bleaching), and image analysis.
If you are using FCS for LSM 980, see the chapter Overview Acquisition Tools [} 1242] for FCS
specific options in the Experiment Regions tool.
Fig. 69: Experiment Regions Tool (Show All) - Enable Import deactivated and activated
Parameter Description
Enable Import Activate this option to show additional controls for the import of Re-
gions which were created by an Image Analysis workflow upfront.
This option is only available in Show all mode.
Parameter Description
Import as shape: Toolbar Select here which type of shape the imported regions will have. The shape of the created Experiment Region can either be a polygon, a simplified polygon, or a rectangle or ellipse with the approximate size of the bounding box of the analysis region.
Alternatively, you can define a custom rectangle or ellipse with a specific size and an offset relative to the center of the bounding box of the analysis result.
Import regions as When you press this button, the currently selected regions from an
Image Analysis result will be imported into the list of Experiment Re-
gions. The properties, e.g. whether the regions will be used for bleaching and analysis, are defined by the neighboring checkboxes.
Color Select here the color which will be used to indicate the imported re-
gions in the image.
Toolbar Use the tools from the toolbar to draw ROIs into the image.
Edit Regions in To add a new Experiment Region to your experiment, you need to
Current Image click the Edit Regions in Current Image button first. The button will
turn to blue to indicate that Experiment Regions are displayed in the
image and can be added or manipulated. When the button is not en-
abled, all standard graphical elements are shown in the image, but
not Experiment Regions. Standard graphical elements are used for im-
age analysis and annotation, but will not have an impact on the next
image acquisition.
Experiment regions which are used e.g. for bleaching, are automati-
cally converted into standard graphical elements while the experiment
is performed.
It is not possible to edit Experiment Regions when the image is dis-
played in certain viewers. Switch back to the 2D view, in case the but-
ton is disabled.
Keep Tool Activated: Keeps the selected graphic tool for multiple actions.
Auto Color Activated: Assigns different colors for each drawn ROI.
Only available in Show all mode
- Analysis Activated: Uses data from the corresponding ROI(s) for Mean of ROI analysis in the MeanROI view.
Parameter Description
Save Saves the selected ROI(s) to the file system.
The following parameters are only visible in the Show All mode.
Dimensions
- W/H Shows the Width (W) and Height (H) of the selected ROI.
Enter new values in the input fields.
- Position regions relative to image The experiment region is placed in relation to the current field of view. Use this option if the regions should maintain their position within the image when the stage is moved.
When this option is deactivated, the regions are defined by their stage coordinates. In this case, regions can be placed on certain structures of your sample and will stay there when the stage is moved.
Note that the system will not attempt to automatically move to these positions, e.g. during bleaching experiments, if they are not accessible in the current scan area.
Fit frame size to bounding box of acquisition regions Activated: Fits all ROIs that are marked as Acquisition ROIs in the table to the total frame that the scanner will cover.
This can decrease imaging time, since the scanner does not have to move over the complete frame of the original image in which the ROIs were drawn.
While the frame size is reduced, the scan speed maintains its value as configured in the Acquisition Mode tool. In order to achieve maximum acquisition speed, you should use the Acquisition Mode tool to manually reduce the frame size and adjust the speed correspondingly.
Info
The software automatically selects the most appropriate focus strategy, e.g. Use Z Values/Fo-
cus Surface defined in Tiles Setup when you activate the tiles dimension and no focus strat-
egy has been previously selected.
Here you can select the focus strategy that you want to apply. The strategies that are available de-
pend on the dimensions selected (z-stack, time series, tiles), the hardware devices present (e.g.
Definite Focus) and software licenses. In general, focus strategies determine and/or update a Ref-
erence Z-Position, which in most cases is used directly for acquisition.
Exceptions
§ When z-stacks are acquired, the center of the z-stack determines the Reference Z-Position.
§ Defined offsets for channels and z-stacks shift acquisition in relation to the Reference Z-Posi-
tion.
§ If two focusing methods are combined, the Reference Z-Position of the first method is used as
the starting point for the subsequent method.
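The relationship between the Reference Z-Position, a per-channel offset, and a z-stack centered on it can be pictured with a small Python sketch. The function name and parameters are illustrative assumptions, not ZEN functions.

```python
def acquisition_z_positions(reference_z, channel_offset=0.0,
                            slices=1, interval=1.0):
    """Z positions derived from a Reference Z-Position.

    A z-stack of `slices` planes spaced by `interval` µm is centered
    on the reference position; a per-channel offset shifts the whole
    stack relative to it.
    """
    center = reference_z + channel_offset
    start = center - (slices - 1) / 2.0 * interval
    return [start + i * interval for i in range(slices)]

# 5-slice stack, 2 µm spacing, centered on a 100 µm reference with
# a +1 µm channel offset:
print(acquisition_z_positions(100.0, 1.0, slices=5, interval=2.0))
# → [97.0, 99.0, 101.0, 103.0, 105.0]
```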
Parameter Description
Focus Strategy
Wizard
– Optimize this focus strategy Opens the Focus Strategy Wizard to optimize the focus strategy used for your tiles experiment.
– None This is the default setting for all experiments that do not include a
tiles dimension (in which case the software would automatically select
the Use Z Values/Focus Surface defined in Tiles Setup strategy).
The current z-position at the time the experiment is started is set as
the Reference z-Position and remains unchanged during the experi-
ment.
Exception:
By default, z-stacks are acquired at the fixed Reference Z-Position that
has been defined as the center in the Z-Stack tool. You can change
this setting in the Z-Stack Acquisition section of the Focus Strat-
egy tool.
– Absolute Fixed Z Position Only available if your licenses include the Tiles & Positions functionality.
A focus strategy that makes use of the z values defined for tiles and positions when using a motorized stage.
– Software Autofocus Only available if your licenses include the Software Autofocus functionality.
Note that any focus surface/z-values defined in the tiles setup are ig-
nored if this strategy is selected.
The focus position is determined via the sharpness calculation or in-
tensity calculation of a series of images (z-stack) and set as the Refer-
ence Z-Position. The settings are configured in the Software Autofo-
cus tool.
– Definite Focus Only available if your microscope system has a Definite Focus device attached.
Note that any focus surface/z-values defined in the tiles setup are ig-
nored if this strategy is selected. Definite Focus attempts to maintain a
certain distance to the cover glass of the sample in order to compen-
sate for mechanical and thermal movements. The Definite Focus is ini-
tialized at the start of the experiment by setting the current distance
as the reference distance. You have to define this value at the start of
the experiment.
When the focus is stabilized during the experiment, the current dis-
tance is adjusted to the reference distance. This is achieved by moving
the focus drive accordingly. The new z-position resulting from this is
used as the Reference Z-Position for acquisition. The repetitions and frequency of these events follow predefined standard settings. If you want to adjust these for a particular experiment, select the Expert mode (only visible in Show all).
§ Definite Focus Recall: The initial z values are determined by re-
calling the previously stored focus (distance to cover glass surface).
§ Definite Focus Stabilize: Stabilizes the last valid reference z-posi-
tion (distance of the sample to the cover glass).
§ Definite Focus Find Surface: Finds the focus position on the sur-
face of the cover glass.
– Use Z Values/Focus Surface defined in Tiles Setup Only available if your licenses include the Tiles & Positions functionality.
This strategy is selected automatically when the Tiles dimension is activated and no previous strategy was selected. With the Tiles & Positions functionality, a focus surface can be defined in two ways: Local (for tile regions and/or positions) or Global (based upon a carrier, e.g. petri dish, slide, plate).
Software Autofocus as Reference for Definite Focus Only available if Combine Software Autofocus and Definite Focus is selected.
Software Autofocus moves the focus drive to the calculated focus position. Taking this as the starting point, a new reference distance is defined for the next distance stabilization performed by Definite Focus. This can reduce the likelihood of a stabilization failure when the sample is elongated and the carrier possibly tilted.
Definite Focus as Start for Software Autofocus Only available if Combine Software Autofocus and Definite Focus is selected.
Uses the last valid Reference Z-Position defined by Definite Focus as the starting position for the Software Autofocus search. This allows you to optimize the search range and step size of Software Autofocus.
Definite Focus Stabilize as Start for Software Autofocus Only available if Combine Software Autofocus and Definite Focus is selected.
Takes the last valid reference z-position defined by Definite Focus as the starting center position for the Software Autofocus search.
Definite Focus Find Surface as Start for Software Autofocus Only available if Combine Software Autofocus and Definite Focus is selected.
Takes the calculated focus position on the surface of the cover glass as the starting bottom position for the Software Autofocus search.
Initial Definition for Z Values/Focus Surface By default this is given by the Tile Setup (user-defined values from the Tiles tool) from the support points/positions/tile regions list.
It is also possible to use the Software Autofocus function or the Definite Focus. The Recall Focus function initially defines these values prior to the start of the image acquisition. The resulting z-values can overwrite the existing values in the support points/positions list.
Z Values/Focus Surface
For Positions: The local focus surface is defined by the discrete z-value assigned to the position. A position cannot have support points.
– Global (Carrier based) A global focus surface is defined based on a selected carrier template.
To create a global focus surface you need to add support points by creating or editing a carrier template in the appropriate section of the Tiles tool. A group of support points is thus used to describe the tilt or curvature of the carrier (again by a process of interpolation). Tile regions or positions "placed" upon this global focus surface are mapped onto it accordingly.
Adapt Z Values/Focus Surface Activated: Adapts the focus surface/z-values by the following options (if available):
– Definite Focus/Software Autofocus Here you select whether you want to adapt the settings by using Definite Focus or Software Autofocus (SWAF). Depending on this selection and the available dimensions of your experiment, the following additional functions can be selected from the second dropdown list:
– SWAF/As additional action The Reference Z-Position calculated from the Local Focus Surface is used as the starting point for an additional software autofocus search which updates the Reference Z-Position.
This allows you to reduce the search range and/or step size of the Software Autofocus (faster).
This strategy is only relevant if Tiles and Time Series experiments are
combined. Again the repetition and frequency of the software autofo-
cus stabilization can be modulated to meet the experiment needs.
– Definite Focus/As additional action The Reference Z-Position calculated from the Local Focus Surface is used as the starting point for a Definite Focus stabilization, which updates the Reference Z-Position (adjusts it to a single stabilization offset, defined at the start of the experiment, so that the distance between the coverslip and objective will be the same).
This method only makes sense in the context of a very thin sample, where the sample lies at a constant close distance to the interface. It also reduces the likelihood of a stabilization failure.
– Definite Focus/Update with single offset This option is available for Time Series experiments only.
This option allows a focus surface (local or global) to be updated by the activity of a Definite Focus stabilization. The focus surface is regularly adjusted by means of a Definite Focus stabilization, which is performed exclusively at a single defined waiting position. A resulting correction of the Reference Z-Position is adopted for all focus areas.
This strategy is only relevant if Tiles and Time Series experiments are
combined. Again the repetition and frequency of the definite focus
stabilization can be modulated to meet the experiment needs.
– Reflex AF/Update with single offset This option is only available for Time Series experiments for Lattice Lightsheet.
This option allows the Reference Z-Position defined in Tiles Setup to be updated by the activity of an additional autofocus based on finding the cover glass reflex. A resulting correction of the Reference Z-Position is adopted for all focus areas.
Stabilization Event Repetitions and Frequency Only available for Definite Focus in combination with Tiles or Time Series experiments.
- Standard This mode uses default settings for stabilization, which we recommend if you are not familiar with the Definite Focus device.
- Expert This mode allows advanced settings for using Definite Focus stabilization.
Under Synchronized with Image Acquisition you can select how
Definite Focus is used:
§ Time Series: If activated, this setting repeats Definite Focus at certain predefined points within an image acquisition loop (e.g. every second time point).
§ Tile Regions/Positions: If activated, this setting repeats Definite
Focus at certain Region/Positions (e.g. every second position).
Optionally, you can also select to run the stabilization for each tile
region at either the center (Center of Regions) or the first ac-
quired tile of the region (First Tile of Regions).
§ Tiles: If activated, this setting repeats Definite Focus at a certain
Tile (e.g. every second Tile).
Under During Time series interval you can enable Periodic Stabilization. Periodic Stabilization is available only for experiments that include a Time Series. If it is activated, a stabilization with a defined Period (e.g. every 10 s) is performed. This mode is useful if long intervals
are needed between image acquisition loops. This mode can be com-
bined with the stabilization events before discrete imaging loops.
Reference Channel and Offsets Displays a table with the currently defined channels. The column Offset sets an offset in µm. The button Set as Reference Channel sets the currently selected channel as reference.
This wizard guides you through the setup of a suitable focus strategy for your experiment. Cur-
rently this wizard is only available to help you optimize the focus strategy for a tiles or positions
experiment. Therefore, the license for the Tiles & Positions module is necessary for this wizard.
Parameter Description
Next Moves on to the next step of the wizard.
Cancel Cancels the wizard. No changes are applied to your focus strategy
settings.
Finish Saves the setup and the changes based on your progress and closes
the wizard.
Parameter Description
Best Practice Checklist Displays a list of best practices to consider before continuing with the setup.
Reference Channels & Offsets
– Offset (µm) Sets and displays the offset for each channel.
The reference channel is used by the software autofocus. If your desired focus in the specimen should be offset from the result of the software autofocus, enter the relative values (in µm) here. The offset can be defined independently for each channel (fluorophore) in the image. Offsets can also be used without the software autofocus to create a relative difference in focus between any channel and the reference channel.
Offset values can be determined outside the wizard. This is a manual
process specific for your given sample and application. Values in the
Channels tool or Focus Strategy tool will appear here and can be
modified.
This step helps you to determine initial focus values for your experiment. You can use the Software Autofocus [} 935], the Definite Focus [} 935], or you can do it manually in the Tiles tool. When you create/add a position or tile region, it is always assigned a z-value that corresponds to the current focus position. You can manually adjust these values directly in the respective section of the Tiles tool [} 359]. For more information, see also Adjusting Z-Values [} 341]. Alternatively,
you can click on the Verify button to open the Verify Tile Regions or Verify Positions Dialog
[} 371] which helps you adjust the values.
If you select to apply the Software Autofocus (SWAF), ZEN uses the SWAF to determine the z-val-
ues for positions, tile regions or their support points. Make sure that your sample is suitable for
use of the SWAF which uses the settings of the reference channel.
Every experiment in ZEN blue has SWAF settings. They can be seen and modified in the Software
Autofocus Tool [} 333]. The default settings are usually a good starting point, but should be opti-
mized according to the sample. You can find a detailed description of the parameters in the help
chapter for the tool. The adjustments will help reduce the time it takes to run the search but still
reliably determine the focus.
See also
2 Determine Initial Focus Values Step [} 934]
If you select to apply Definite Focus.2 (DF.2), ZEN uses the DF.2 device on your Axio Observer to
determine the z-values for positions, tile regions or their support points. Make sure that the se-
lected objective and your sample are suitable for use of the Definite Focus.2.
In the first step of this wizard, ZEN indicates if the selected objective can be used. You should test
your sample with DF.2 beforehand, but typically the following guidelines can be given:
See also
2 Determine Initial Focus Values Step [} 934]
This step determines which focus surface is needed for the experiment based on the sample size
and distribution. For general information, see Focus Surfaces in ZEN [} 936]. The first two op-
tions require a local focus surface, which can be employed with or without a sample carrier. If
your sample corresponds to the third option, a global focus surface is necessary, which can only
be used and modified with a sample carrier template.
Parameter Description
Sample Carrier Selects a sample carrier and displays the name of the currently se-
lected one.
– Select Opens the Select Template Dialog [} 386] to select a sample carrier.
– Opens the sample carrier selection/editor dialog. Here you can edit
and add global support points to the selected sample carrier.
Edit Support
Points
– Deletes the selected sample carrier from the sample carrier field. The
template will still be available in the Select Template dialog.
Delete
Depending on the nature of your sample, its size, and its location on the sample holder, ZEN provides two options that employ a so-called focus surface to help keep your sample in focus. A focus surface is basically a topographical map of a certain part of the sample. Like the contour lines on a map, it tells the microscope, for any XY coordinate on it, how the objective should be set in Z in order to make sure the image of the sample is sharp.
In principle, ZEN uses the same method regardless of the size of the sample. A focus surface can simply be a single z-value assigned to your position or small tile region. For larger, more complex objects it is more sophisticated. For example, a large tile region created to image an entire tissue section might consist of several hundred images. It is neither practical to define the focus with a single z-value, nor efficient to define the z-value at each of these several hundred images. In this case the focus surface is defined by a limited number of z-values at discrete locations. The z-values of these “support points” are used to create a topographical map (the focus surface) by interpolation. It covers the entire region you want to image and defines the focus of each image acquired on it. In this way huge areas can be imaged where every frame is focused correctly.
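The interpolation from support points can be illustrated by the simplest case: a tilted plane through three support points. This Python sketch is didactic only; ZEN's actual interpolation over many support points is more general and may use a different scheme.

```python
def plane_from_support_points(p1, p2, p3):
    """Fit the plane through three support points (x, y, z).

    Returns a function mapping any stage (x, y) to an interpolated
    focus z. Three points give the simplest tilted focus surface;
    real surfaces may use many more points and smoother interpolation.
    """
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = p1, p2, p3
    # Normal vector of the plane via the cross product of two edges.
    ux, uy, uz = x2 - x1, y2 - y1, z2 - z1
    vx, vy, vz = x3 - x1, y3 - y1, z3 - z1
    nx = uy * vz - uz * vy
    ny = uz * vx - ux * vz
    nz = ux * vy - uy * vx
    if nz == 0:
        raise ValueError("support points must not be collinear")

    def focus_z(x, y):
        # Solve n · ((x, y, z) - p1) = 0 for z.
        return z1 - (nx * (x - x1) + ny * (y - y1)) / nz

    return focus_z

# A slide tilted by 1 µm per mm in x; the interpolated surface
# reproduces the tilt at any stage position:
surface = plane_from_support_points((0, 0, 50.0), (1000, 0, 51.0),
                                    (0, 1000, 50.0))
print(surface(500, 500))  # → 50.5
```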
ZEN defines two types of focus surface:
§ Local: Use this mode to image groups of positions, small tile regions or mixtures of both at high and lower magnifications. It is also used to image large tiles or a number of large tiles, e.g. tissue sections or embryos. This is the standard setting for most imaging applications.
§ Global: Use this mode to image samples that are widely spread across your sample carrier. The global focus surface is assigned to and defined on your sample carrier template. In this manner it is typically used at lower magnifications, when a very precise local z-value is not needed to create a sharp image. In some cases this allows it to be reused for one sample carrier after another if they are mounted in a regular and consistent manner.
See also
2 Sample Size and Distribution Step [} 935]
This step is only available if a DF.2 or a SWAF is configured. You are asked if you want/need
to adapt the focus during your experiment. If you have both Definite Focus and Software Autofo-
cus configured, you are also asked if the focus is static relative to the sample carrier. Depending
on your configuration and selection, the adaptation of focus values via Definite Focus or Software
Autofocus is displayed in the next step of the wizard.
Adaptation of the focus values in your experiment might not always be necessary. Typically, if you
have followed the suggestions laid out at the beginning of the Focus Strategy wizard for best
practice and your experiment does not involve creating time lapse images of the specimen or long
extended periods of imaging (i.e. the sample is fixed or no longer living), you might not need to
adapt the focus at all during the imaging process. The z-values or focus surface values you define
will probably be stable enough that the images of the specimen will remain sharp.
However, in many cases, especially when living samples are involved, it is necessary to adapt your
z-values. Depending on the configuration of the system two approaches are available to adapt
your z-values:
§ Software Autofocus: This is a technique that adapts the focus based on taking a series of images at planes above and below the current z-value. The system evaluates these images based on contrast or intensity and then determines where the highest value lies. This plane becomes the new z-value and in most cases it is the plane we are interested in, or can be used as a reference for it. Z-stacks are centered relative to this value. This allows an adaptation of z-values when the sample moves (e.g. growth, migration), but also if the sample and sample carrier drift together. By its very nature this approach might need optimization to get better, more reliable, and/or faster results.
§ Definite Focus.2: This is a function to quickly determine the distance between your objective's front lens and the sample. Using DF.2, only the drift of the sample and sample carrier together can be compensated. This can be done to restore or maintain this distance anywhere and at any time. If necessary you can adjust this distance depending on the location in the sample and how far into it the focus is required. Adaptation of the focus with Definite Focus.2 is very quick and highly repeatable. As it is based on the physical principles of light refraction, it requires certain characteristics of the sample and objective. ZEN informs you if the objective is not suitable and adjusts the selection in the Focus Strategy wizard accordingly.
See also the chapter Determine Initial Focus Values (Definite Focus) [} 935] which gives
some basic guidance on the types of sample that can be used with the different objective
types.
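The Software Autofocus search described above (acquire a series of planes, keep the sharpest) can be sketched in a few lines. The `score` callable stands in for acquiring an image at a given z and computing a contrast or intensity measure; everything here is an illustrative assumption, not the ZEN implementation.

```python
def software_autofocus(score, z_center, search_range, steps):
    """Pick the z with the highest sharpness score.

    Evaluates `steps` evenly spaced planes across `search_range`,
    centered on `z_center`, and returns the best-scoring z, which
    would become the new Reference Z-Position.
    """
    half = search_range / 2.0
    step = search_range / (steps - 1)
    candidates = [z_center - half + i * step for i in range(steps)]
    return max(candidates, key=score)

# Synthetic sharpness measure peaked at z = 103 µm; a 20 µm sweep
# around 100 µm in 1 µm steps finds it:
sharpness = lambda z: -(z - 103.0) ** 2
print(software_autofocus(sharpness, z_center=100.0,
                         search_range=20.0, steps=21))  # → 103.0
```

A smaller search range and step size (as suggested for the combined Definite Focus/SWAF strategies) simply reduces the number and spread of the candidate planes.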
See also
2 Adapting Focus Values Step [} 937]
This step guides you through the adaptation of focus values with the Software Autofocus. The op-
tions displayed here depend on whether the experiment is a time series or not, and also on the
type of sample which you have selected in the Sample Size and Distribution Step [} 935].
This step guides you through the adaptation of focus values with the Definite Focus. The options
displayed here depend on whether the experiment is a time series or not, and also on the type of
sample which you have selected in the Sample Size and Distribution Step [} 935].
This wizard step provides you with detailed information on how to set focus values manually. The
displayed information depends on the sample characteristics you have selected in the Sample Size
and Distribution Step [} 935].
CAUTION
Risk of Crushing Fingers
The drive of a microscope stage with a motorized vertical axis (focus drive) is strong enough to
crush fingers or objects between the stage and the microscope stand.
4 Remove your fingers or any objects from the danger area before moving the focus drive.
4 Release the joystick immediately to stop the movement.
This tool changes the vertical distance (i.e. Z direction) between stage and objective. This enables
you to focus the sample, or, for a sample with an uneven surface, to focus the area of interest.
Parameter Description
Current Displays the stage position in µm
Initially, when you use the Focus tool for the first time after switching
on the microscope, the exact position of the stage is not known.
Therefore, the position indicated by Current is initially set to zero. If
you enter a value, the stage moves by the entered amount relative to
the current position. If you want to move the focus to an absolute po-
sition, you must first click Home to move the focus to one of the end
positions. The value of Current is set to this known position. You can
then enter an absolute position.
Position control Enables you to set the stage position. You can either use the Naviga-
tion Bar to move the stage up or down or you can enter the target
position in the Current input field.
Normal modes:
§ Inner segments: Slow
§ Outer segments: Medium
High-speed modes:
§ Inner segments: Fast
§ Outer segments: Very Fast
– Current Defines the target position of the stage in µm. The stage starts mov-
ing immediately after the coordinates have been entered and con-
firmed by pressing the Enter key or by clicking anywhere outside the
Current input field.
Step Size Defines the difference in µm by which the stage moves at each step.
Indirectly this defines the speed of the stage movement.
The Step Size also determines the accuracy of the focus position.
Home Moves the focus to one of the end positions. The value of Current is
set to this known position.
This ensures that the position shown as Current corresponds to the
actual stage position.
Work Moves the stage back to the position it was in before using the Load button (i.e. the work position).
If you have moved the stage (e.g. using the Navigation Bar) after moving it into the load position, the work position is lost and the Work button will not work.
Z-Position Specifies which position of the motorized z drive is used as the origin
(zero value)
– Set Zero Sets the current focus position as the origin (zero value)
Note that this tool is available only when you have licensed functionality for Image Analysis or
3D Image Analysis.
Parameter Description
Setting Selects the image analysis setting.
– New Creates a new analysis setting. Enter a name for the setting.
– New from Creates a new setting based on an existing setting. The template set-
Template ting will not be modified.
– Create new from analyzed image Reads the analysis (czias) settings from a previously analyzed image and creates a new setting from them.
– Save Saves a modified setting under the current name. An asterisk indicates
the modified state.
Method Displays the defined segmentation method for the currently selected
Setting.
– Edit Opens the Segmentation Method dialog.
Edit Image Analysis Setting Opens the Image Analysis Wizard to define a new analysis program or to change an existing program, see Image Analysis Wizard [} 941].
Start Interactive Analysis Runs the selected analysis setting with all the interactive steps.
Note that steps which you have not marked as interactive in the Image Analysis Wizard are run with the values set in the analysis setting. The program does not stop to allow you to adapt them interactively.
See also
2 Image Analysis [} 402]
2 Creating a New Image Analysis Setting [} 403]
2 3D Image Analysis [} 525]
2 Creating a 3D Image Analysis Setting [} 525]
Parameter Description
Method Selects the method that is used for segmentation.
- Segment Region Classes Independently For each defined region class the specified image channel is segmented using the class segmenter, which can be defined in the Automatic Segmentation step of the image analysis wizard.
- ZOI (Zones of Influence) Constructs a zone of influence (ZOI) and a ring around each primary object.
- 3D Segmentation Only available if you have licensed the 3D Image Analysis functionality.
Uses a segmenter optimized for 3D analysis. For each defined region class the specified image channel is segmented using the class segmenter, which can be defined in the Automatic Segmentation step of the image analysis wizard.
This wizard guides you through the setup of an image analysis. It is only available if you have li-
censed functionality for Image Analysis or 3D Image Analysis. Note that some of the parame-
ters differ depending on whether you set up a 2D or a 3D analysis.
The following basic controls enable you to move through the steps:
Parameter Description
Next Moves on to the next step of the wizard.
Finish Saves the setup and the changes based on your progress and closes
the wizard.
9.3.17.1 Classes
In this step you can define the classes into which the measured objects in the image are divided.
Parameter Description
Interactive Activated: The class definition can be changed interactively while the
analysis setting is run with Start Interactive Analysis.
Classes List Displays the defined classes. If you create a new analysis setting, a
predefined set of classes is created automatically. The classes list dif-
fers if you have a 3D analysis setting.
For a 2D analysis, each class consists of two entries. The first entry
concerns the entirety of all objects belonging to the class. The second
entry represents the individual objects.
For a 3D analysis, each class consists only of one entry.
Add Class Adds a new individual class to the list on the base level.
Name Defines the name for the selected class in the list.
Note that you must not use the name Root for one of your classes as this is a reserved keyword.
Channel Selects the channel that is used for image segmentation of the se-
lected class in the Classes list.
Object Color Only visible if you have selected a class entry for individual object.
9.3.17.2 Frame
In this step you can define one or more measurement frames. Only the area within the measure-
ment frames gets analyzed. You can also define how the analysis treats objects that are cut by the
border of the image or the frame.
Parameter Description
Interactive Activated: The measurement frame definition can be changed inter-
actively while the analysis setting is run with Start Interactive Analy-
sis.
Maximize Circle Only available for 2D settings and only active if you have defined pre-
cisely one circle.
Activated: Maximizes the drawn-in circle to the full image size. In the
case of rectangular images the circle is adjusted to the shorter side.
Center Circle Only available for 2D settings and only active if you have defined precisely one circle.
Activated: Centers the drawn-in circle to the full image size.
Mode Selects how the measurement frame should be applied. Note that this
behavior is only applied when running the analysis (interactively) and
not during the setup. The setup always uses the Cut at Frame mode.
The following modes are available:
- Inside Only Measures only those objects that lie completely within the measurement frame. Objects that are touching the frame or are intersected by the frame are not analyzed.
- Cut at Frame Measures all objects that are lying within the measurement frame.
Objects that are intersected by the measurement frame are measured
precisely up to the measurement frame.
The following fields are only active if you have selected a drawn-in graphic element:
Left Sets the starting point for the frame on the X axis in pixels.
Top Sets the starting point for the frame on the Y axis in pixels.
Show Frame On Analyzed Image Activated: Displays the frame on the image after the analysis has run.
In this step you can select the segmentation method that is applied and set parameters for the
segmentation of the objects that you want to measure. Note that during the setup of the analysis
(via Edit Image Analysis Setting), the segmentation is only performed on the area visible in the
viewport. If you enter the analysis wizard via Start Interactive Analysis, the image will be fully
segmented.
Parameter Description
Execute Activated: Sets the defined threshold values when the measurement
program is run.
Classes Selects the class for which you want to define the segmentation.
Ring Element Class This additional parameter is only available if you have selected the
Zone of Influence (ZOI) method.
- Ring Distance Distance from the surface of the primary object. Negative values mean that the ring starts at the defined distance within the primary object.
ZOI Class This additional parameter is only available if you have selected the
Zone of Influence (ZOI) method.
- ZOI Width Sets the distance of the border of the ZOI from the border of the ring, or the main object, respectively.
Info
3D Parameters
If you are setting up a 3D analysis, the available parameters are either grouped under 2D Pa-
rameters for general (2D optimized) parameters, or under 3D Parameters, if the parameters
are specifically optimized for 3D operations.
The visible parameters depend on the selected segmentation method. The following parameters
sections can be available:
Parameter Description
Smoothing Selects how to smooth the image before the threshold values are set.
The following methods are available:
- Lowpass Applies the Lowpass method. The lowpass filter compares the brightness of each pixel to the brightness of its neighboring pixels. If a pixel is brighter than its neighbors, the brightness of this pixel is reduced and the brightness of the neighboring pixels is increased. This suppresses sharp changes in brightness (i.e. contours) and leads to more gradual changes in brightness.
- Gauss Applies the Gauss method. Each pixel is replaced by a weighted aver-
age of its neighbors. The weighting depends on the sigma value. The
Gaussian filter is particularly useful for contour enhancement, which is
very sensitive to noise. Using a Gaussian filter before finding contours
greatly improves the results.
- Median Applies the Median method. Each pixel is replaced by the median of
its neighbors. The number of neighboring pixels taken into account
depends on the size. In a set of values (in this case the pixel values
taken into account), the median is the value for which the number of
larger values is equal to the number of smaller values.
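The effect of the Median method can be illustrated with a minimal 3 x 3 median filter in pure Python. This is a didactic sketch, not the filter implementation used by the software; border pixels are simply left unchanged.

```python
def median3x3(image):
    """3 x 3 median filter on a 2D list of pixel values.

    Each interior pixel is replaced by the median of the nine values
    in its 3 x 3 neighborhood. Border pixels are left unchanged for
    brevity; a production filter would pad the image instead.
    """
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(image[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # median of 9 sorted values
    return out

# A single hot pixel (255) in a flat background is removed, which is
# why median smoothing suppresses noise before thresholding:
img = [[10] * 5 for _ in range(5)]
img[2][2] = 255
print(median3x3(img)[2][2])  # → 10
```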
Parameter Description
Sharpen Selects how to improve the sharpness by enhancing contrast at fine structures and edges of the image before the threshold values are set.
The following methods are available:
Strength Only visible, if you have selected Unsharp Masking.
Sets the strength of the Unsharp Masking. The higher the value se-
lected, the greater the extent to which small structures are enhanced.
Here you can define the threshold values for the selected class in the class list.
Parameter Description
Threshold Sets the brightness boundaries between which pixels are considered.
– Reset Clears the upper and lower thresholds. No pixels are considered.
Color Model Only visible if the image is a color image, see Color Model [} 949].
- RGB In RGB mode you can define the threshold values for the red, green
and blue color channels.
- HLS In HLS mode you can define the threshold values for hue, saturation
and lightness.
– Low Sets the lower threshold. Only pixel values above this value are con-
sidered. The range of possible values depends on the bit depth of the
image.
– High Sets the upper threshold. Only pixel values below this value are con-
sidered. The range of possible values depends on the bit depth of the
image.
– Invert Only pixels outside the threshold boundaries are considered, i.e. those pixels below the lower threshold and above the upper threshold.
– Full Range Sets the lower threshold to 0 and the upper threshold to the highest
value (depending on bit depth). The entire range of pixel values is
considered.
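The Low/High/Invert behavior described above can be sketched as a small NumPy predicate. This is an illustrative sketch, not the ZEN implementation; whether pixels exactly at the boundary are counted is an assumption here:

```python
import numpy as np

def threshold_mask(img, low, high, invert=False):
    """Low/High: pixels strictly above low and strictly below high are
    considered. Invert: pixels below low or above high are considered."""
    if invert:
        return (img < low) | (img > high)
    return (img > low) & (img < high)

img = np.array([0, 50, 128, 200, 255])
mask = threshold_mask(img, 40, 210)          # selects 50, 128 and 200
inv = threshold_mask(img, 40, 210, invert=True)  # selects 0 and 255
```

Full Range corresponds to setting low to 0 and high to the maximum value for the bit depth (e.g. 255 for an 8-bit image).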
Histogram In the histogram you can change the lower and upper threshold value
for the activated value. Drag the lower or upper adjustment handle or
shift the entire highlighted area between the lower and upper thresh-
old value.
Click Click in the image on the regions that you want to define as objects.
The threshold values are adapted according to the pixel intensities at
the clicked position in the image.
Automatic The threshold values are determined automatically from the his-
togram. During setup only the part of the image displayed in the
viewport is taken for the calculation of the threshold. After the auto-
matic calculation of the threshold values you can further modify the
threshold values found interactively by selecting Click for threshold
value definition.
- + Enables you to expand the currently segmented regions by the gray
values/colors of the objects subsequently clicked on.
- Otsu The pixel values below the threshold are designated as background and those above the threshold as foreground. The method iterates through all possible threshold values and, for each value, calculates the variance of the pixel intensities of the background and foreground pixels. The threshold is set to the value that minimizes the weighted sum of the two variances (the within-class variance). This method is particularly suited to light objects on a dark background.
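An exhaustive Otsu search can be sketched as follows (an illustration only, not the ZEN implementation; the histogram binning is an assumption):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Tries every histogram bin boundary as threshold and keeps the one
    minimizing the weighted sum of background and foreground variances."""
    hist, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_score = centers[0], np.inf
    for i in range(1, bins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue                         # one class empty: skip
        m0 = np.average(centers[:i], weights=hist[:i])
        m1 = np.average(centers[i:], weights=hist[i:])
        v0 = np.average((centers[:i] - m0) ** 2, weights=hist[:i])
        v1 = np.average((centers[i:] - m1) ** 2, weights=hist[i:])
        score = w0 * v0 + w1 * v1            # within-class variance
        if score < best_score:
            best_t, best_score = centers[i], score
    return best_t

vals = np.concatenate([np.full(50, 10.0), np.full(50, 200.0)])  # bimodal
t = otsu_threshold(vals)                     # lands between the two modes
```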
- Iso-Data The pixel values below the threshold are designated as background
and those above the threshold as foreground. An initial threshold
value is chosen, and the mean pixel intensity of the foreground and
background pixels is calculated. These two mean values are averaged
and the result serves as the input threshold for the next calculation.
The process is repeated until the threshold value no longer changes.
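The Iso-Data iteration reads almost directly as code (a sketch assuming the global mean as the initial threshold, which is a common choice but an assumption here):

```python
import numpy as np

def isodata_threshold(values, eps=0.5):
    """Iterates: average the foreground mean and background mean, feed the
    result back in as the new threshold, until it no longer changes."""
    t = values.mean()                        # initial threshold
    while True:
        fg, bg = values[values > t], values[values <= t]
        new_t = (fg.mean() + bg.mean()) / 2  # average of the two means
        if abs(new_t - t) < eps:
            return new_t
        t = new_t

vals = np.concatenate([np.full(50, 10.0), np.full(50, 200.0)])
print(isodata_threshold(vals))  # 105.0
```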
- Triangle Threshold The algorithm constructs a line between the peak of the highest-frequency pixel intensity and the lowest pixel intensity. The distance between the line and the histogram is computed for all values along the line. The pixel intensity where this distance is greatest is used as the threshold. This method is particularly suited when the foreground pixels only have a weak peak in the histogram.
- Three Sigma Threshold Calculates the pixel value that occurs most frequently. The standard deviation of the values in the peak is calculated. The threshold is set to the pixel intensity that is the sum of the average peak value and three times the standard deviation.
RGB
Here you can set the RGB channel threshold values.
Parameter Description
Red Activates the red channel in the Expander Histogram.
HLS
Here you can set the hue, lightness and saturation threshold values.
Parameter Description
Hue Activates the hue in the Expander Histogram.
Parameter Description
Kernel Size Sets the kernel size used to calculate the variance value of one pixel
with its neighboring pixels.
Variance Defines the lower and upper threshold for the variance.
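The local variance used here can be sketched with NumPy (an illustration only; kernel size and edge padding are assumptions):

```python
import numpy as np

def local_variance(img, size=3):
    """Variance of each pixel's size x size neighborhood (edge-padded):
    near zero in flat areas, high at edges and textured regions."""
    pad = size // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    win = np.stack([p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                    for dy in range(size) for dx in range(size)])
    return win.var(axis=0)

img = np.zeros((4, 6)); img[:, 3:] = 100.0   # vertical step edge
v = local_variance(img)
# v is zero in the flat regions and nonzero along the edge;
# thresholding v with the lower/upper Variance bounds yields the mask
```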
This section is only visible if Intellesis Trainable Class Segmenter or AI Instance Segmenta-
tion is selected.
Parameter Description
Model Name Displays the name of the currently selected model.
– Select Model Opens the dialog to select a segmentation model. Note that you can
only use models trained on a single channel.
Model Class Displays the name of the currently used model class.
Min. Confidence Sets the minimum value (in %) for the confidence that a certain pixel
belongs to the segmented class. The default value is 51.
Parameter Description
Model Name Selects a model.
Only models trained on single-channel images are shown here because only those can be used to segment a specific class assigned to a specific channel.
Parameter Description
Model Version Only visible for AI Instance Segmentation.
Selects the version of the model.
Parameter Description
Subtract BG Only visible if Segmentation with Background Subtraction is se-
lected.
Selects which kind of background subtraction is performed.
Parameter Description
Min. Object Size Sets the minimum size in pixels that an object must have in order to
be segmented.
Min. Hole Size Sets the minimum size in pixels that a hole must have in order to be
recognized for segmentation. This input is synchronized with the in-
put for Min. Object Size, which must not be smaller than Min. Hole
Size.
Fill all Holes Specifies how holes in detected objects are treated.
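The Min. Object Size behavior can be sketched with a simple connected-component pass (an illustration only, not the ZEN implementation; 4-connectivity is an assumption):

```python
import numpy as np
from collections import deque

def remove_small_objects(mask, min_size):
    """Drops 4-connected foreground components smaller than min_size pixels."""
    out = mask.copy()
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                # flood-fill one component and collect its pixels
                comp, q = [(y, x)], deque([(y, x)])
                seen[y, x] = True
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            comp.append((ny, nx))
                            q.append((ny, nx))
                if len(comp) < min_size:     # too small: erase the component
                    for cy, cx in comp:
                        out[cy, cx] = False
    return out

mask = np.zeros((5, 5), dtype=bool)
mask[0, 0] = True                            # 1-pixel speck
mask[2:4, 2:4] = True                        # 4-pixel object
clean = remove_small_objects(mask, min_size=2)   # speck removed, object kept
```

Min. Hole Size works analogously on the background components inside objects.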
Parameter Description
Binary Selects which morphological operations are performed on the seg-
mented (binary) image.
- Open Performs first erosion and then dilation. The effect is smoothing and
removing of isolated pixels.
- Close Performs first dilation and then erosion. The effect is smoothing of
the objects and filling of small holes.
- Dilate Enlarges the boundaries of segmented regions. Areas grow in size and
holes within the regions become smaller.
- Erode Erodes boundaries of the segmented regions. The areas shrink in size
and holes within the areas become larger.
Count Sets how often the selected binary operation is performed with the
slider or input field.
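The four binary operations can be sketched with NumPy; Open and Close are simply compositions of Erode and Dilate. A minimal illustration with a 3x3 cross structuring element (the structuring element is an assumption, not the ZEN default):

```python
import numpy as np

def dilate(mask):
    """One dilation step: regions grow by one pixel, holes shrink."""
    p = np.pad(mask, 1)                      # pad with background
    c = p[1:-1, 1:-1]
    return c | p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2] | p[1:-1, 2:]

def erode(mask):
    """One erosion step: regions shrink by one pixel, holes grow."""
    p = np.pad(mask, 1)
    c = p[1:-1, 1:-1]
    return c & p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]

def binary_open(mask, count=1):
    """Open = erosion then dilation: removes isolated pixels."""
    for _ in range(count):
        mask = erode(mask)
    for _ in range(count):
        mask = dilate(mask)
    return mask

def binary_close(mask, count=1):
    """Close = dilation then erosion: fills small holes."""
    for _ in range(count):
        mask = dilate(mask)
    for _ in range(count):
        mask = erode(mask)
    return mask
```

The Count parameter maps to the number of erosion/dilation repetitions, as in the `count` argument above.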
Separate Selects whether you want to process the image further after segmen-
tation. Objects that are touching one another can be separated using
different methods.
- Morphology Separates objects by first reducing and then enlarging them, making
sure that once objects have been separated they do not merge to-
gether again.
- Watersheds Separates objects that are roughly the same shape. The result is two
shapes separated by a thin 1-pixel boundary. The rest of the shape
perimeter remains unchanged. This method may however result in the
splitting of elongated objects.
Count Sets the count value, which is similar to a Sigma for Gauss applied to
a binary image.
Suppress Invalid Activated: Discards invalid pixels at the border of the image.
Only visible if you set up a 3D analysis setting. With this section you can manipulate the preview
in 3D.
Parameter Description
Z-Position Selects the z-position around which the preview is shown.
Parameter Description
Source Selects the segmentation source. Depending on the selected seg-
mentation source, the functionalities in the Image Analysis
Wizard change accordingly.
- From Image Channel Uses the channel of the multi-channel image defined in the Classes step for segmentation of the selected region class.
- Remaining to Frame Takes the pixels within the measurement frame that are not assigned to any other class. The resulting regions are not displayed to improve performance, but the results are contained in the table and chart.
- Take from Parent Regions Takes the regions that fulfill a certain condition, defined in the Region Filter step, from the parent regions.
Note: This does not work for intensity features as region filter.
This option is used to distribute objects from one class into sev-
eral subclasses with the use of region filters. A parent class can
be segmented with any method and refined via region filters. Re-
gion filters in the subclasses define the criteria of objects to in-
clude in this subclass. Each object belongs to the first subclass of
the source where it fulfills all requirements. A subclass without
region filters takes all objects from the parent class. Objects that
are not contained in any of the subclasses are excluded from the
analysis.
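The subclass distribution described above can be sketched in Python. The dict-based objects and predicate filters are hypothetical stand-ins for regions and region-filter conditions, not the ZEN data model:

```python
def distribute_to_subclasses(objects, subclasses):
    """Each object goes to the first subclass whose filters it all satisfies;
    a subclass without filters takes everything; objects matching no subclass
    are excluded from the analysis."""
    result = {name: [] for name, _ in subclasses}
    for obj in objects:
        for name, filters in subclasses:
            if all(f(obj) for f in filters):
                result[name].append(obj)
                break                        # first matching subclass wins
    return result

cells = [{"area": 80}, {"area": 25}, {"area": 5}]
subclasses = [
    ("large", [lambda o: o["area"] >= 50]),
    ("medium", [lambda o: 10 <= o["area"] < 50]),
]  # no catch-all subclass: the 5-pixel object is excluded
groups = distribute_to_subclasses(cells, subclasses)
```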
- Segment by Global Thresholding A global threshold is applied to the image channel for image segmentation.
- Intellesis Trainable Class Segmenter Uses machine-learning algorithms to segment the selected class by applying a trained Intellesis model or an imported neural network (e.g. trained on arivis Cloud).
- AI Instance Segmentation Uses an AI model for instance segmentation. You need the Docker Desktop software running on your PC and a suitable model available, see also Downloading AI Models [} 71].
See also
2 Automatic Segmentation [} 944]
Info
Settings based on image view
Note that during the setup of the analysis (via Edit Image Analysis Setting), the segmenta-
tion is only performed on the area visible in the viewport. Therefore, if you adapt the region fil-
ter by clicking on the objects displayed in the viewport, and select objects that are cut by the
current viewport, these objects will only be segmented partially. The region filter will only be
adapted based on the part of the object that is in the viewport. If you want to select objects that exceed the viewport, we suggest adapting the region filter values manually.
If you enter the analysis wizard via Start Interactive Analysis, the image will be fully seg-
mented during the segmentation step, and therefore it is possible to adapt the region filters by
clicking on the objects in the image.
In this step you can define the region filter conditions under which you want an object to be mea-
sured.
Parameter Description
Execute Activated: Uses the region filters when the measurement program is
run.
Interactive Activated: The region filters can be changed while the analysis set-
ting is run with Start Interactive Analysis.
Classes Selects the class for which you want to define the conditions.
Region Filters If you have defined one or more blocks with region filters in the Re-
gion Filter Editor, you can select the block for which you want to
set the filter.
Select the relevant block and set the maximum/minimum values either
by clicking on the objects in the image you want to include in the
measurement, or by entering the maximum/minimum values sepa-
rately.
Reset Resets all settings for the conditions.
See also
2 Region Filter Editor [} 955]
All features in the list of selected features are calculated during image analysis. The results are dis-
played in the results table for all detected objects of the same class. The columns of the features
are sorted according to the order they appear in the Selected Features list.
The results of the settings you set here are displayed in the last step of the wizard in the table
with results on the right side.
For a detailed description of the individual features, see Measurement Features [} 440].
Parameter Description
Selected Features Displays the features that you have selected block by block. All fea-
tures in a block are And-linked, i.e. an object is only measured if the
values of each individual feature fall within the defined range.
Add Block Adds an Or block. If several Or blocks are defined, an object is mea-
sured if it meets the condition in at least one block.
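The And/Or block logic described above can be sketched as a single predicate. The dict-based objects and (min, max) feature ranges are hypothetical illustrations, not the ZEN data model:

```python
def object_passes(obj, blocks):
    """Features within a block are And-linked; blocks are Or-linked:
    an object passes if it satisfies every range in at least one block."""
    return any(
        all(lo <= obj[feature] <= hi for feature, (lo, hi) in block.items())
        for block in blocks
    )

blocks = [
    {"Area": (50, 500), "Circularity": (0.8, 1.0)},  # block 1: large and round
    {"Area": (1000, 5000)},                          # block 2: very large
]
obj = {"Area": 120, "Circularity": 0.9}
print(object_passes(obj, blocks))  # True (matches block 1)
```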
Available Features Displays the list with all available features. Double-click on a feature to add it to the list of selected features on the left.
Add Adds the selected feature to the list of selected features on the left.
In this step you can post-process the segmented objects interactively. You can modify the results
of the automated segmentation to analyze your image data. Note that this step is not available
for 3D Image Analysis. Also note that during the setup of the analysis (via Edit Image Analysis
Setting), the segmentation is only performed on the area visible in the viewport. If you enter the
analysis wizard via Start Interactive Analysis, the image will be fully segmented.
Parameter Description
Interactive Activated: The segmented objects can be post-processed interac-
tively while the analysis setting is run with Start Interactive Analy-
sis.
Parameter Description
Draw Enables you to draw new objects of the selected class.
Erase Enables you to erase parts of an object. While pressing the left mouse
button, outline the parts of the object that you want to erase. Right-
click to erase these parts of the object.
Cut Enables you to separate connected objects. While pressing the left
mouse button, draw in the separation line between the objects.
Right-click to cut the objects.
Merge Enables you to connect objects. While pressing the left mouse button,
outline the parts of the object that you want to merge. Right-click to
merge the objects.
Parameter Description
Polyline Region Enables you to add a line object.
Parameter Description
Mode
- Click on areas in the image you want to add to the selected object
class.
- Click on areas in the image you want to remove from the selected ob-
ject class.
Intensity Sets the tolerance value for the intensity. The tolerance value specifies
how much the intensity of a pixel may deviate from the average inten-
sity of the object in order to still "grow" to become part of the object.
Parameter Description
Region Filter Reapplies the region filter you defined in the previous step to the
post-processed image.
Parameter Description
Undo Undoes the last action.
9.3.17.6 Features
Info
Analysis Setting for Experiment Feedback
Features you have defined are available from within the Feedback Experiment. Any time you
change the image analysis settings, you need to reload the *.czias file in the feedback script
editor to activate the changes. The image analysis settings are typically saved in the Program-
Data folder on your hard drive, e.g. C:\ProgramData\Carl Zeiss\ZEN\Users\XXXX\Docu-
ments\Image Analysis Settings.
Parameter Description
Interactive Activated: The features can be changed interactively while the analy-
sis setting is run with Start Interactive Analysis.
Classes Selects the class for which you want to define measurement features.
For a 2D analysis setting, the tree view has two entries for each class
for which you can define features. The first entry ("Classes") concerns
the collection of the objects belonging to the class. The second entry
("Class") represents all individual objects belonging to the class. The
typical features of "Classes" are statistical values of the measurement
feature over all single objects of that class. For example, Mean Inten-
sity channel 1 gives you the mean intensity of all single objects. Area
gives you the sum of all areas (i.e. the total area) of the individual ob-
jects.
For a 3D analysis, each class has only one entry in the tree view for
which you can define features for individual regions of the class and
features for all regions of the respective class.
Features of Individual Regions/Features of All Regions Displays the selected features for the currently selected class and allows you to select additional features. Depending on the class selection, you can select and see the features for individual regions or all regions.
- Feature Displays the feature name(s) added for the currently selected class.
- Display If you activate Display in the feature selection dialog for a feature,
the result of the measurement is displayed next to the corresponding
object in the analyzed image.
Annotations Only visible for 2D analysis settings and if in the tree view a "Class"-
node (i.e. the node concerning the individual regions) is selected.
Displays the selected annotations for the currently selected class and
allows you to select features which will be shown as annotations in
the analyzed image. An example is the center of each region.
- Color Activated: Allows you to select the color for the region annotations.
Custom Feature Opens the editor to define a custom feature, see Creating Custom
Features [} 427].
See also
2 Workflow Experiment Feedback [} 290]
Parameter Description
Selected Features Displays all selected features that are calculated for the object during
image analysis.
- Display Activated: The value of the feature for each object is displayed in the
analyzed image.
- Copy Only visible for Classes (collection of objects) and if more than one
class exists.
Selects where the feature is copied to. If the Copy column is empty,
the selected feature is not copied to any result table. For a description
of the possible copy operations, see Copy Operations for Features
[} 433].
- Move Down Moves the currently selected feature one position down in the list.
Available Features Displays the list with all available features. Double-click on the feature
to add the feature to the list of selected features on the left.
Search Features Here you can enter parts of the name of the feature that you are
looking for. The features in which the entered character string occurs
are listed.
Select a type of feature according to which you want the features to
be filtered from the dropdown list.
- Intensity Features All features that analyze intensity values are listed.
- Image Features All features that contain meta information about the measured image are listed.
- Position Features All features that describe the position are listed.
- Geometric Features Unscaled All features that describe unscaled geometric features are listed.
- Position Features Unscaled All features that describe unscaled positions are listed.
- Statistical Features All features that can be used for plotting in a heatmap (i.e. that provide statistical values suitable for heatmap plotting) are listed.
Add Adds the selected feature to the list of selected features on the left.
Parameter Description
Selected Features Displays all selected features that are calculated for the object during
image analysis.
- Display Activated: The value of the feature for each object is displayed in the
analyzed image.
- Copy Only visible for Classes (collection of objects) and if more than one
class exists.
Selects where the feature is copied to. If the Copy column is empty,
the selected feature is not copied to any result table. For a description
of the possible copy operations, see Copy Operations for Features
[} 433].
- Move Down Moves the currently selected feature one position down in the list.
Available Features Displays the list with all available features. Double-click on the feature
to add the feature to the list of selected features on the left.
Parameter Description
Search Features Here you can enter parts of the name of the feature that you are
looking for. The features in which the entered character string occurs
are listed.
Select a type of feature according to which you want the features to
be filtered from the dropdown list.
- Intensity Features All features that analyze intensity values are listed.
- Image Features All features that contain meta information about the measured image are listed.
- Position Features All features that describe the position are listed.
- Geometric Features Unscaled All features that describe unscaled geometric features are listed.
- Position Features Unscaled All features that describe unscaled positions are listed.
- Statistical Features All features that can be used for plotting in a heatmap (i.e. that provide statistical values suitable for heatmap plotting) are listed.
Add Adds the selected feature to the list of selected features on the left.
Parameter Description
Selected Features Displays all selected features that are calculated for the object during
image analysis.
- Display Activated: The value of the feature for each object is displayed in the
analyzed image.
- Copy Only visible for Classes (collection of objects) and if more than one
class exists.
Selects where the feature is copied to. If the Copy column is empty,
the selected feature is not copied to any result table. For a description
of the possible copy operations, see Copy Operations for Features
[} 433].
- Move Up Moves the currently selected feature one position up in the list.
- Move Down Moves the currently selected feature one position down in the list.
Available Features Displays the list with all available features. Double-click on the feature
to add the feature to the list of selected features on the left.
Search Features Here you can enter parts of the name of the feature that you are
looking for. The features in which the entered character string occurs
are listed.
Select a type of feature according to which you want the features to
be filtered from the dropdown list.
Parameter Description
Custom Features Displays a list of the created custom features.
– Add Adds a new custom feature that can be defined with the options on the right side.
Define Custom Feature
– Unit Specifies the unit of the feature as free text input. This input is op-
tional.
Define Operands Displays and defines the operands used for the calculation of the cus-
tom feature.
– Class Selects the class that is used for the definition of the operand. For a
single region class, also the classes of the children can be selected.
– Feature Displays all available predefined measurement features for the se-
lected class and selects which measurement feature should be used
for the definition of the current operand. If you activate the checkbox
behind the selection dropdown, the selected feature is also set visible
in the result table.
– Expression Displays the expression of the Operand(s) defined by the Class and
Feature selection.
Define Custom Expression Defines the mathematical calculation of the feature, using the Operands and mathematical operators, e.g. 100*(a/b+Math.Pow(c,2)).
– Abs Adds the mathematical operator to return the absolute number, i.e. non-negative values.
See also
2 Creating Custom Features [} 427]
2 Examples for Custom Features [} 438]
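The expression syntax shown above, e.g. 100*(a/b+Math.Pow(c,2)), uses C#-style operators. The same calculation written out in Python, with hypothetical operand values, makes the evaluation order explicit:

```python
import math

# Hypothetical operand values bound to operands a, b and c
a, b, c = 10.0, 4.0, 3.0

# Mirrors the custom expression 100*(a/b+Math.Pow(c,2))
value = 100 * (a / b + math.pow(c, 2))
print(value)  # 1150.0
```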
9.3.17.7 Statistics
In this step you can define custom statistical features for your regions or objects. Note that this
step is not available for 3D Image Analysis. When an image analysis using custom statistical fea-
tures is successfully performed, the resulting values are available in the heatmap graph when the
base-node of the classes tree is selected and if well plate or sample carrier information is present.
Parameter Description
Interactive Activated: The custom feature definition can be changed interac-
tively while the analysis setting is run with Start Interactive Analy-
sis.
Classes List Selects the class for which you want to define the custom statistical
feature(s).
Parameter Description
Define Custom Opens the editor to define a custom statistic feature, see Creating
Feature Custom Statistical Features [} 428].
Parameter Description
Custom Features Displays a list of the created custom statistical features.
– Add Adds a new custom feature that can be defined with the options on the right side.
Define Custom Feature
– Unit Specifies the unit of the feature as free text input. This input is op-
tional.
Define Operands Displays and defines the operands used for the calculation of the cus-
tom statistical feature.
– Class Selects the class that is used for the definition of the operand.
– Feature Displays all available measurement features for the selected class and
selects which measurement feature (if any) should be used for the
definition of the current operand. If you activate the checkbox behind
the selection dropdown, the selected feature is also set visible in the
result table.
– Statistical Operation Selects the statistical operation used for the current operand, e.g. MEAN, MIN, MAX, SUM and COUNT.
Define Custom Expression Defines the mathematical calculation of the feature, using the Operands and mathematical operators, e.g. 100*(a/b+Math.Pow(c,2)).
– + Adds the mathematical operator for summation to the calculation.
– Abs Adds the mathematical operator to return the absolute number, i.e. non-negative values.
See also
2 Creating Custom Statistical Features [} 428]
2 Examples for Custom Statistical Features [} 439]
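The statistical operations listed above combine per-object feature values into one value per class. A minimal Python illustration with hypothetical per-object areas (not the ZEN implementation):

```python
# The statistical operations available for custom statistical feature operands
STATS = {
    "MEAN": lambda v: sum(v) / len(v),
    "MIN": min,
    "MAX": max,
    "SUM": sum,
    "COUNT": len,
}

areas = [12.0, 20.0, 28.0]          # hypothetical per-object areas of one class
mean_area = STATS["MEAN"](areas)    # 20.0

# A custom expression combining two statistical operands, e.g. 100*(a/b)
value = 100 * STATS["SUM"](areas) / STATS["COUNT"](areas)
```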
In this step you see a preview of the measurement result. The results table contains only the measurements performed in the current viewport, which increases performance during setup. These results may differ from the actual results when the complete image analysis is performed.
The results in the table depend on the settings you made in the Feature step. The table contains
all selected features for the highlighted class/classes. Click on a row of the table to highlight the
corresponding object in the image or vice versa.
The measured image is displayed in the Analysis View [} 429].
Parameter Description
Classes Here you can select the class for which you want to see the measured
features.
For a 2D analysis setting, there are two entries for each class: The
"parent" class, which shows the features for all objects together, and
the "child" class, which shows the features for each individual object.
For a 3D analysis setting, there is only one entry per class.
Highlight Box
- Color Allows you to set the color of the highlight box surrounding the se-
lected object in the image.
- Line Width Allows you to set the line width of the highlight box around the se-
lected object in the image.
Parameter Description
Enable Chart Only visible for 2D image analysis settings.
Activated: Displays a chart with measurement results in the Analysis
View by default.
Deactivated: Displays no chart in the Analysis View by default.
Y-Axis Only visible for 2D image analysis settings and only available for the
scatter chart.
Selects the default feature that is displayed on the y-axis of the chart
in the Analysis View.
The elements displayed in the drop down menu depend on the previ-
ously defined measurement features.
Multiple Scenes Only visible for 2D image analysis settings and only available for multi-
scene images.
Activated: Sets as the default chart a histogram/scatter chart that contains the data points of all scenes.
Time Series Only visible for 2D image analysis settings and only available for time
series images.
Activated: Sets as the default a chart that displays the analysis results
of the selected class in a time series chart.
Heatmap Only visible for 2D image analysis settings and only available for experiments with multi-well/multi-chamber plates (multiple scenes).
Activated: Sets as the default chart a heatmap of the well plate.
Finish Saves the created analysis setting and ends the wizard.
Parameter Description
List of opened images and documents Here you find a list of all images and documents which are currently opened in the Center Screen Area. The disk symbol with a small warning sign indicates that the chosen image or document has unsaved changes.
Save Saves the chosen file. The Save As dialog opens if you have not yet saved the file.
Parameter Description
Quick Export Automatically exports the active image with the default settings of the Image Export processing function. Time series or z-stack images are automatically exported with the default settings of the Movie Export processing function.
Only visible, if the Enable Imaging Setup checkbox in the Tools > Options > Acquisition >
Acquisition Tab is activated.
View and adjust the hardware parameters used for confocal (LSM) or camera (WF) experiments.
All hardware parameters set to detect one or more specific signals simultaneously are defined as
one track. Specific tracks can be combined for image acquisition which is then called a multitrack
acquisition.
Info
Note that the Microscope Control tool on the Locate tab has a similar appearance and simi-
lar control elements but its function differs from the Imaging Setup tool described here.
If there is no track configured, you need to add a first track to the experiment by clicking on the drop-down button. To see a list of the available acquisition modes, click on the arrow on the right side of the button.
By selecting a WF track you can see the graphical display of the acquisition light path with various
icons. The arrangement of the icons represents the typical set-up of the microscope components
configured on your system. For a description of the most common icons, read Reflected/Transmit-
ted Light Path.
The associated hardware settings are shown above the icons and can be changed here. To
change the relevant hardware settings, left-click on the icons. In the shortcut menus you will see
numerous selection and setting options for adjusting your settings.
Info
Any change you make is automatically adopted and written to the corresponding hardware
setting of the experiment. If you want to undo these changes, do not save the experiment. In-
stead, reload the experiment in the Experiment Manager.
If you change the hardware settings in this section, please bear the following points in mind:
§ If the Include in this setting checkbox is activated, the component is included in the hardware settings of the experiment and subsequently applied in the experiment. Activated components are highlighted in blue.
§ Components with a deactivated checkbox are not adopted into the hardware settings of the
experiment and are not subsequently applied in the experiment. These components are dis-
played with a grayed-out icon.
§ Components with a filled-in checkbox and a triangle underneath are only partially adopted
into the hardware settings of the experiment and subsequently applied in the experiment. To
show the sub-components, click on the triangle under the checkbox. To adopt the sub-com-
ponents into the hardware settings of the experiment and subsequently apply them in the ex-
periment, activate the relevant checkboxes for the sub-components.
Only visible, if the Enable Advanced Imaging Setup checkbox in the Tools menu | Options |
Acquisition | Acquisition Tab | Enable Imaging Setup is activated.
There is a switch on top of the tool. By clicking on the button you can switch from the Standard
to the Advanced Imaging Setup.
Parameter Description
Before/After Experiment Shows the name of the hardware setting that will be applied immediately before or after the experiment.
Options Opens the Options menu for the specific hardware setting.
Previous/Next The buttons allow you to navigate through the various hardware
settings.
Parameter Description
Clear all unused hardware settings from experiment By clicking on the Clear button you can delete all unused hardware settings from your experiment.
All available hardware settings in experiment Here you can see all available hardware settings.
For the Lattice Lightsheet, the following specific settings can be made in the Imaging Setup tool:
Parameter Description
Transmitted Light Opens the selection of transmitted light source (white or red LED).
Define and control parameters for temperature, atmosphere and the Y-Module. The available pa-
rameters depend on which components you have configured on your system.
Info
The symbols behind measured values indicate if the measured and the set values are
4 the same = green check mark,
4 different = red or blue triangle with exclamation point, or
4 not activated = blue circle with question mark.
Parameter Description
Temperature Here you can control up to 4 independent heating channels that are
linked to certain devices (e.g. incubator XL, heating insert P, objective
heater etc.). The devices are assigned to different channels in the Mi-
cro Tool Box (MTB).
Atmosphere Here you can define the O2 and CO2 concentration, as well as the temperature for an Air Heater module. Note that the meaning of the symbols behind the measurement values is the same as described above.
- Air Heater Activated: The air heater will be used for the experiment.
Under Setpoint you can set the temperature of the air heater in °C.
Under Measured you see the currently measured value.
§ Fan Speed:
Sets the rotation speed of the fan.
Y-Module The Y-Module panel allows setting the temperature for two indepen-
dent modules (thermostats).
- Selected Here you can select which module you want to control (Module 1 or
Module 2).
- Circulator 1-2 Activated: The channel will be used for the experiment.
Under Setpoint you can set the temperature of the channel in °C.
Under Measured you see the currently measured value.
For each module two circulator channels can be activated.
Parameter Description
Input Image: Input image with file name and path to the target folder.
Set Input Automatically: Activated: The currently active image is automatically loaded as the input image. This can be used to load the output image of the first image processing step as input for the next image processing step. Turn this function off if you want to test various processing parameters and compare the results.
Parameter Description
Apply to preview only: If the preview option in the Output tool is selected, activating this parameter applies the processing function to the preview region only and leaves the parts of the image outside the preview region unchanged.
Switch to Output: Activated: Automatically switches the view to the processed image.
Remain at current view: Activated: The currently active image document will remain active after the image processing step.
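The preview-only behavior can be sketched as follows: a processing function is applied to pixels inside a rectangular preview region, while pixels outside it are left untouched. This is an illustrative sketch only, not ZEN code; the function and region names are hypothetical.

```python
# Illustrative sketch of "Apply to preview only": process pixels inside a
# rectangular preview region and leave everything outside it unchanged.
# This is NOT ZEN API code; all names here are hypothetical.

def apply_to_preview_only(image, region, func):
    """image: 2D list of pixel values; region: (x0, y0, x1, y1),
    inclusive-exclusive; func: per-pixel processing function."""
    x0, y0, x1, y1 = region
    out = [row[:] for row in image]          # copy so the input stays intact
    for y in range(y0, y1):
        for x in range(x0, x1):
            out[y][x] = func(image[y][x])    # process inside the region only
    return out

img = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
result = apply_to_preview_only(img, (1, 1, 3, 3), lambda v: v * 2)
# Pixels outside the region keep their original value.
```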
This tool allows you to bleach interactively during a Continuous scan or during a Time Series experiment while image acquisition is performed. The bleach region is determined by pointing the mouse at the position in the image.
Parameter Description
Mouse Behavior: Select the mouse behavior for bleaching from the dropdown list. The following settings are available:
- Bleach while mouse button pressed: The bleaching process is continued while the mouse button is pressed.
- Bleach fixed number of iterations: The bleaching process is repeated a fixed number of times after the mouse button is pressed. The number of iterations can be entered below.
- Allow cancel by click: If activated, the bleaching process can be stopped before the fixed number of iterations is completed. Simply click the left mouse button again to stop the bleaching process.
Bleaching Stencils list: The available bleaching stencils (equivalent to ROIs for bleaching) are listed here. A stencil is displayed in its graphical form; the color indicates the laser line assigned to this stencil for bleaching. You can add a stencil by clicking the + Create from graphic button. This imports an activated graphical element from the Graphics tool tab. A stencil can be saved, loaded or deleted. A stencil is active when it is clicked and highlighted. To use an activated stencil for bleaching, move the mouse cursor onto the image and click the left mouse button.
Laser Settings for section: Here you can set the laser line and laser power for the activated stencil.
Parameter Description
Feature Set
– Feature Set Dropdown: Selects and loads previously saved feature definitions/feature sets. If you have made changes to a feature definition, the name of the feature selection is marked with an asterisk (*). If you close the application without saving a changed feature selection, you will be asked whether you want to save the changes.
– Define: Opens the Feature Selection dialog to define the features that are available for interactive measurements.
– Feature Subset Dropdown: Selects and loads previously saved definitions of subsets. If you have made changes to a subset definition, the name of the feature subset is marked with an asterisk (*). If you close the application without saving a changed feature subset, you will be asked whether you want to save the changes.
– Options: Opens the options menu to create, import, export, save or delete a feature subset definition.
– Define: Opens the Feature Subset Definition dialog to define which features are available for the definition of the feature set.
– Define: Opens the dialog to define the sequence of measurements that you want to execute interactively.
Create Measurement Table: Creates a measurement data table and opens it as a separate document. This contains the measurement data from the Measure view of the current image.
See also
2 Using Interactive Measurements [} 248]
In this dialog you can specify which features are measured with the available graphical elements.
Line: Displays graphical elements you can use to measure a single distance.
Lines: Displays graphical elements you can use to measure several distances at once.
Parameter Description
Name: Displays the name of the feature.
Features Section
This section displays a list with all the features that are available for measuring the selected graphical element. For a description of individual measurement features, see Measurement Features [} 440].
Search Features: Here you can enter part of the name of the feature that you are looking for. The features in which the entered character string occurs are listed.
Select the type of feature by which you want the features to be filtered from the dropdown list:
- Intensity Features: All features that analyze intensity values are listed.
- Image Features: All features that contain meta information about the measured image are listed.
- Position Features: All features that describe the position are listed.
- Geometric Features Unscaled: All features that describe unscaled geometric properties are listed.
- Position Features Unscaled: All features that describe unscaled positions are listed.
- Statistical Features: All features that can be used for plotting in a heatmap (i.e. that provide statistical values suitable for heatmap plotting) are listed.
In this dialog you can specify which features are available in the Feature Selection dialog by activating the checkbox in front of the features. A right-click menu offers the possibility to select or deselect all features.
Parameter Description
Search Features: Here you can enter part of the name of the feature that you are looking for. The features in which the entered character string occurs are listed.
Select the type of feature by which you want the features to be filtered from the dropdown list:
- Intensity Features: All features that analyze intensity values are listed.
- Image Features: All features that contain meta information about the measured image are listed.
- Position Features: All features that describe the position are listed.
- Position Features Unscaled: All features that describe unscaled positions are listed.
- Statistical Features: All features that can be used for plotting in a heatmap (i.e. that provide statistical values suitable for heatmap plotting) are listed.
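The search behavior described above (substring match on the feature name, optionally restricted to one feature type) can be modeled in a few lines. This is an illustrative model, not ZEN code; the data layout and names are hypothetical.

```python
# Illustrative model of the feature search: list all features whose name
# contains the entered string, optionally filtered by feature type.
# Hypothetical data; not ZEN API code.

features = [
    {"name": "Intensity Mean", "type": "Intensity Features"},
    {"name": "Intensity Sum", "type": "Intensity Features"},
    {"name": "Center X", "type": "Position Features"},
]

def search_features(features, query, feature_type=None):
    """Return features whose name contains `query` (case-insensitive),
    restricted to `feature_type` if one is selected in the dropdown."""
    q = query.lower()
    return [
        f for f in features
        if q in f["name"].lower()
        and (feature_type is None or f["type"] == feature_type)
    ]

hits = search_features(features, "intensity")                 # both intensity features
pos = search_features(features, "center", "Position Features")
```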
In this dialog you can define an interactive measurement procedure. You can specify the order in
which the individual graphic elements should be drawn in and which measurement parameters
you want to have calculated for them.
Line Displays graphical elements you can use to measure a single distance.
Lines Displays graphical elements you can use to measure several distances
at once.
Points Displays graphical elements you can use to count various events in an
image.
Double click on an element to select it and add it to the Selected Elements Sequence list.
Parameter Description
Delete: Deletes the selected feature.
Features Section
This section displays a list with all the features that you can measure with the graphical element
activated in the Available Elements section. For a description of individual measurement fea-
tures, see Measurement Features [} 440].
Parameter Description
Search Features: Here you can enter part of the name of the feature that you are looking for. The features in which the entered character string occurs are listed.
Select the type of feature by which you want the features to be filtered from the dropdown list:
- Intensity Features: All features that analyze intensity values are listed.
- Image Features: All features that contain meta information about the measured image are listed.
- Position Features: All features that describe the position are listed.
- Position Features Unscaled: All features that describe unscaled positions are listed.
- Statistical Features: All features that can be used for plotting in a heatmap (i.e. that provide statistical values suitable for heatmap plotting) are listed.
With this dialog you can execute your previously defined sequence for interactive measurement.
You can draw the graphical elements into the image. You also have the standard controls of the
Dimensions and Display tabs to adapt the image display and change to other image dimensions
to draw the elements.
The software offers a magnetic cursor for your image. This cursor detects edges/contrast changes and automatically snaps to them, which can help you add measurements or annotations. Note that this cursor only works reliably on one channel; for multi-channel images, activate only one channel in the view options. The magnetic cursor can be activated and deactivated via the right-click context menu of your image, or with the shortcut Alt + C.
Parameter Description
Operation
– Start: Starts the drawing mode to add the sequence of graphical elements to the image.
– Pause/Continue: Pauses the drawing. This allows you to modify graphical elements that have already been drawn. The button changes to Continue for resuming the drawing of the elements.
– Stop: Ends the drawing mode of the measurement sequence.
Measurement Sequence: Displays the graphical elements of the current measurement sequence in the predefined order.
Measurement Data Table: Displays the values measured with the graphical elements. A right click in the header of a column opens a context menu for the respective column.
– Sort Data: Opens a dialog to define how to sort the data in the table.
– Filter Data: Displays a field to define criteria for filtering the data in the table.
– Name: Activated: Displays the column with the name of the graphical elements.
– Feature: Activated: Displays the column with the name of the measurement feature used for the respective element.
– Unit: Activated: Displays the column with the unit of the measurement.
Cancel: Cancels the interactive measurement without saving the graphical elements.
Parameter Description
Record: Records a new macro. The Record button changes to a Stop button. Press this Stop button to stop the macro recording.
– Delete: Deletes the selected macro.
– Options: With the options dropdown you can create, duplicate, rename, and save new macro files, or delete existing macros.
Preview: Here you see a preview of the macro program code of the selected macro. Editing the macro here is not possible.
Properties
– Keywords: Displays keywords for the selected macro. Keywords can also be entered in this text field.
– Description: Displays the description for the selected macro. A description can also be entered in this text field.
– Toolbar Configuration: When you click the button, the Customize Toolbar dialog opens. There you can add macro buttons or functions to the toolbar for quick access. How to configure the toolbar is described under Customizing Toolbar [} 39].
See also
2 Right Tool Area [} 28]
The configuration of your system according to your MicroToolBox (MTB) configuration is shown here. A valid microscope configuration has to be created first using the MTB2011 Configuration program. The light path is displayed following the light from the light source to the specimen and from there to the camera or eyepiece. It displays control elements for all motorized and manually operated components. Here you can interactively adjust the microscope and its components.
CAUTION
Risk of Glare
There is a risk of glare when changing positions of the beam splitter wheel, excitation filter wheel or emission filter wheel in the Microscope Control tool on the Locate tab, if available. Excitation light can unintentionally be directed to and emitted through the ocular if an inappropriate combination of positions is selected.
4 Do not look into the ocular when changing settings on the light path.
4 We recommend using virtual reflector revolver settings instead of changing positions via the mentioned components.
Info
If you are not using any motorized components, you will have to make the relevant adjust-
ments manually.
Keep the following points in mind when working with this tool:
Shutter: Here you can set the shutter to Open or Closed. The status is displayed in text form above the icon.
Reflector Turret: Here you can select one of the configured filter cubes for reflected light techniques from the list.
Stage: Here you see the options for Stage Control and Focus Control.
§ Stage >>: Opens the Stage Tool [} 985] in the Right Tool Area. There you can move the microscope stage virtually with the help of a software joystick or by entering absolute coordinates. You can also calibrate the stage within that tool.
§ Focus >>: Opens the Focus Tool [} 938] in the Right Tool Area. There you can move the focus drive virtually with the help of a software joystick or by entering absolute coordinates. You can also calibrate the focus drive within that tool.
Aperture Diaphragm: Adjust the diaphragm opening (0% to 100%) using the slider or spin box/input field.
Filter Wheel: Here you can enter the first neutral density filter (e.g. 0.4%, 6%, 100%, 100%) that you require.
Camera/Eyepiece Switch: Select from the list whether you want to direct the light to the camera only (100% Camera), to the camera and the eyepiece (30% Eyepiece/70% Camera) or to the eyepiece only (100% Eyepiece).
Reflected Light/Transmitted Light Switch: If your microscope has a halogen lamp for both reflected and transmitted light illumination, here you can select whether you want to control the halogen lamp for reflected light illumination or the halogen lamp for transmitted light illumination.
6x Motorized Beam Splitter Wheel: This device is part of the Motorized Dual Filter Wheel. Select the desired Dichroic position from the dialog. Switching time is about 300 msec between neighboring positions.
6x Motorized Emission Filter Wheel: This device is part of the Motorized Dual Filter Wheel. Select the desired Emission Filter position from the dialog. Switching time is between 60 and 240 msec between neighboring positions (depending on the speed configuration in the MTB2011 Configuration program).
6x Motorized Excitation Filter Wheel: Select the desired Excitation Filter position from the dialog. Switching time is between 70 and 300 msec between neighboring positions (depending on the speed configuration in the MTB2011 Configuration program).
12x Motorized vReflector Changer: If a Motorized Dual Filter Wheel and a Motorized Emission Filter Wheel are present, up to 12 virtual Reflector positions can be configured in the MTB2011 Configuration program. Select the desired filter combination from the list of available positions. This is more convenient than adjusting excitation, dichroic and emission filters individually.
Parameter Description
Contrast Manager
Mode: Select the setting for the contrast mode from the Mode dropdown list.
- Off: The Contrast Manager is not used. All settings must be made manually or via a settings file.
- On Demand: The function of the Contrast Manager is activated via the touch screen on the microscope.
- Contrast Retaining: If core components (e.g. condenser, reflector, shutter) for a certain contrast technique are changed, dependent components are also changed accordingly.
Method: Select one of the available methods for the contrast mode here.
Parameter Description
Light Manager
Enabled: Activated: Activates the Light Manager and the Mode dropdown list in the Light Manager.
Mode: Select a setting for adjusting the brightness of the light here.
- Objective: Adjusts the brightness of the light via the lamp voltage. The color temperature changes accordingly.
- Classic: Adjusts the brightness on the basis of the available filter wheels. The color temperature is retained. Only if the brightness adjustment cannot be achieved via the filter wheels does adjustment take place via the lamp voltage.
Parameter Description
Objective List: Here you can easily switch between the objectives and pre-magnification. The color bar on the objective buttons indicates the color for the respective stage limit indicator inside the Navigation tab. If you select autocorr objectives (motorized correction collar), you can additionally adjust the relevant settings like Correction Mode, Bottom Thickness or Imaging Depth.
NOTICE
Risk of Memory Depletion
By default, the movie recorder acquires images with the highest possible speed, which might
fill available memory and lead to a crash. To avoid this, limit the acquisition speed.
4 Go to Tools > Options > Acquisition > Camera/Live and activate Show Camera Ex-
pert Options. Then click OK.
ð The dialog closes and the setting is saved.
4 Go to the Locate tab, open the Camera tool and expand the Camera Specific section.
ð The camera specific settings, including expert settings are displayed.
4 Change the value for Frame Time, which sets a waiting time between image acquisitions within the camera.
ð You have adapted the acquisition speed.
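As a rule of thumb, the frame time needed for a target frame rate is its reciprocal. The helper below is an illustrative calculation only; in ZEN, Frame Time is set in the Camera tool UI rather than via code, and the function name is hypothetical.

```python
# Illustrative helper: convert a desired maximum frame rate into a minimum
# Frame Time value (waiting time per frame) in milliseconds.
# Not ZEN API code; ZEN's Frame Time is set in the Camera tool.

def frame_time_ms(target_fps):
    """Minimum time per frame, in ms, to stay at or below target_fps."""
    if target_fps <= 0:
        raise ValueError("target_fps must be positive")
    return 1000.0 / target_fps

# Limiting the recorder to 20 frames per second needs at least 50 ms per frame.
print(frame_time_ms(20))  # 50.0
```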
Info
Saving the Data
If you want to save the recorded data and open it in other applications (e.g. ImageJ), it needs
to be decoded. To do this, you have to save the data via File > Save As with Options, or ex-
port the data with the dedicated image/movie export functionality in ZEN.
This tool enables you to acquire image sequences in the form of videos using the camera's fastest
burst mode. To play the acquired movie in ZEN, use the Player tab in the Center Screen Area
(only visible in Show All mode).
Parameter Description
Start Movie: Starts the acquisition. The button changes into the Pause button. A Stop button appears in the window above the button.
Pause Movie: Pauses the acquisition. The button changes into the Continue button.
Continue Movie: Continues acquisition if it has been paused. The button changes into the Pause button.
Parameter Description
Output Image: Normally this window is empty as the output image does not yet exist. If the Overwrite option is activated, you can select an already existing output image here so that the output is overwritten every time the function is applied. This prevents multiple output images from being created when testing function parameters.
Parameter Description
Create New Output: Activated: Saves the image file with a new name.
Naming: Opens a dialog to define conventions for the image file name.
Preview: Displays a preview window in your image. In this window you can see an automatically calculated preview of what the result will look like with the current parameter settings.
See also
2 Image Processing Workflow [} 72]
This tool enables you to send an image to arivis Pro and open it there with an analysis pipeline selected in ZEN. The input image for the tool is always the currently opened image. A description of each pipeline is displayed as the tooltip of the respective pipeline.
Parameter Description
Options
– Add Pipeline Directory: Opens a browser to select and add a folder with pipelines you created in arivis Pro to the list of directories below.
– Remove Pipeline Directory: Removes the selected pipeline directory in the list below. Note: You can only remove whole directories/top-level folders which you have imported. Removing subfolders or individual pipelines from the list is not possible!
Open and Apply: Opens arivis Pro with the current image and the selected pipeline.
This tool enables you to navigate the sample in a microscope equipped with a motorized stage.
You can use the Navigation Circle (software joystick) to move the stage or enter the coordinates
directly.
CAUTION
Risk of Crushing Fingers
The drive of a microscope stage with a motorized horizontal stage axis (stage drive) is strong
enough to crush fingers or objects between the stage and nearby objects (e.g. a wall).
4 Remove your fingers or any objects from the danger area before moving the stage drive.
4 Release the joystick immediately to stop the movement.
NOTICE
Risk of Spilling the Sample
If you use the sample carrier 60mm petri dish for Celldiscoverer, you risk spilling if the stage
speed and acceleration exceed 50%. On initial selection of this sample carrier, both Speed and
Acceleration are automatically set to 50% and a warning message is displayed.
4 Do not increase stage speed and acceleration above 50% for the 60mm petri dish.
Info
You can also control the Navigation Circle and thus the motorized stage with the keyboard.
To activate keyboard control left-click anywhere inside the segmented Navigation Circle. To
change between the two speed modes, right-click the central Navigation Circle icon.
4 To move the stage at the lower speed, use the arrow keys (diagonal movements are also
possible).
4 To move the stage at the higher speed, use Shift + Arrow keys.
Parameter Description
Navigation Circle Enables you to move the stage freely in the X and Y direction and in
both diagonal directions.
To move the stage, drag the Navigation Circle icon in the desired di-
rection. If released, the icon snaps back to the Navigation Circle cen-
ter and the stage stops.
The Navigation Circle allows four speeds:
§ Normal modes:
– Inner segments: Slow
– Outer segments: Medium
§ High-speed modes:
– Inner segments: Fast
– Outer segments: Very Fast
Parameter Description
Stop: Stops any stage movement immediately. Use this button if you entered an X-Position and/or Y-Position and wish to interrupt the stage movement immediately (e.g. to prevent a collision).
X-Position, Y-Position: Specifies the target coordinates for the stage movement. The stage starts moving immediately after the coordinates have been entered and confirmed, either by pressing the Return key or by clicking anywhere outside the current input field.
Speed: Sets the speed for stage movement in percent (100% = maximum possible speed). Note that the speed setting does not change the speed graduation of the software joystick.
Acceleration: Sets the acceleration of the stage in percent (100% = maximum acceleration value).
X/Y-Position
- Set Zero: Sets the current position as the new zero point for the x/y coordinates.
Marks: This section displays a list where you can define X and Y positions (optionally with a z value), so-called marks. The marks can also be defined via the Marks button on the Dimensions tab. It is also possible to import an already existing list of positions (e.g. defined with the Tiles tool).
- Name: Displays and sets the name for the respective entry. To add or edit the name, double-click into the column.
- X/Y/Z (µm): Displays the x and y position of the respective stage mark. If you have activated Z-Values, the z value is also displayed.
Parameter Description
- Move Down: Moves the selected entry one position down in the list.
- Go Previous: Navigates the stage to the position of the previous list entry.
- Go Next: Navigates the stage to the position of the next list entry.
If this dialog appears after you have started the software and the hardware has been initialized, you should calibrate the stage and focus drive immediately.
The calibration is necessary if:
CAUTION
Risk of Crushing Fingers!
The drive of a microscope stage with a motorized vertical axis (focus drive) is strong enough to
crush fingers or objects between the stage and the microscope stand.
4 Before starting the calibration procedure, ensure that people stand clear of the instrument
and that the full travel range is not obstructed by any objects.
If you skip the calibration, you can calibrate the stage and focus drive afterwards within the Stage Control and Focus Control dialogs accessible via the Lightpath tool, see Stage Control and Focus Control. Make sure that the Show All mode is activated to see the Calibrate button within the dialogs.
Note that for fully automated systems like Axio Scan the axes are calibrated automatically. In that case calibration is not necessary.
This tool permits setting the bleach parameters for a bleaching experiment in combination with a time series. Bleaching or photo manipulation is done between acquisitions.
Parameter Description
Start after # of images: Activated: Enables you to set the number of frames that are imaged before the bleaching process.
Repeat after # of images: Activated: Enables you to set the number of images that will separate a repetitive bleaching procedure.
- Spot Bleach Duration: This function is only available with LSM 980. When spots are defined for bleaching, enter the time for each spot to be bleached in this editing field.
- Trigger In: The bleach event waits for a TriggerIN signal (TTL pulse) to be executed.
- Trigger Out: The system outputs a trigger signal (TTL pulse) for each bleach event.
Parameter Description
Excitation of Bleach
- Use different settings for different regions: Activated: For each previously drawn ROI a tab is present in which laser line and laser power can be chosen.
All Regions: Only visible if Use different settings for different regions is not activated. Activate the corresponding checkbox for a laser line. Use the slider to adjust the power for bleaching/photo-manipulation. Note that for high laser power the High Intensity Laser Range must be set in the Imaging Setup or Channels tool (only for LSM 900).
The software offers different image views. The general image views are visible for each image.
The specific image views are available only if the image has the appropriate characteristics (e.g.
multiple channels, Z-stack, etc.). Each image view has general and specific controls which you can
use to work with the view.
These image views are available with any image. Depending on the type of image in question, the
general control elements may have additional or more limited functions.
9.4.1.1 2D View
This is the default view to display images in the software. Here you can adjust how the image is
displayed with the general view option tabs on the bottom. Additionally, you can add graphical
elements like annotations and measurements. A right click into the image area opens a context
menu offering quick options to adjust the view, see 2D View Context Menu [} 990]. If you have
drawn graphical elements into your image, a right click on such an element opens a separate con-
text menu to modify the selected element(s), see Graphical Elements Context Menu [} 992].
See also
2 General View Options [} 1029]
2 Center Screen Area [} 26]
This context menu is displayed if you right click in the image displayed in the 2D view. Note that if
you right click a graphical element or within the bounding box of a graphical element drawn into
your image, a different context menu is displayed, see Graphical Elements Context Menu
[} 992].
Zoom Group: Here you have access to the main zoom functions, see also Zoom Section [} 1030].
Rulers: Activated: Displays rulers at the top and left edge of the image.
Show Floating Scale Bar (Alt + S): Activated: Displays a scale bar, which you can position freely in your image.
Draw Annotation Scale Bar: Adds an annotation scale bar to the image and lists it in the Graphics tab.
Spot Measurement/Focus ROI: This function is only active in the live image or during Continuous mode. Displays a region in which the exposure time is measured and the software autofocus is focused.
Magnetic Cursor (Alt + C): Activated: Changes the mouse cursor. This cursor detects edges/contrast changes and automatically moves to them, which can help you to add measurements or annotations. Note that this cursor only works reliably on one channel, so for multi-channel images only activate one channel in the view options.
Paste Display Settings: Inserts the copied display settings into the image.
Best Fit: Adjusts the display characteristic curve so that 0.1% of the darkest pixels contained in the image are black and 0.1% of the brightest pixels are white in the display.
ROI (Region of Interest) > Draw Region of Interest (Ctrl + U): Enables you to draw a rectangular region of interest into the image. The ROI is displayed with red boundaries. You can draw several regions into an image.
ROI (Region of Interest) > Draw Rotatable Region of Interest (Ctrl + Shift + R): Enables you to draw a rectangular region of interest into the image. This region can be rotated and is displayed with yellow boundaries. You can draw several regions into an image.
ROI (Region of Interest) > Create Subset Images From ROI (Ctrl + Shift + C): Creates a new image document from the selected regions drawn into the image. All dimensions of the image are taken into account here. This function works for both the non-rotatable and rotatable ROIs (the red and the yellow regions).
Paste (Ctrl + V, Shift + Ins): Inserts a graphic element into the current image from the clipboard.
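The Best Fit entry above clips 0.1% of the darkest and 0.1% of the brightest pixels when setting the display curve. A minimal sketch of this kind of percentile-based display stretch, assuming a simple linear mapping (not ZEN's actual implementation):

```python
# Sketch of a "best fit" display stretch: map the 0.1% darkest pixels to 0
# and the 0.1% brightest to the display maximum, linearly in between.
# Illustrative only; ZEN's internal algorithm is not documented here.

def best_fit_display(pixels, clip=0.001, display_max=255):
    """Return display values for a flat list of pixel intensities."""
    ordered = sorted(pixels)
    n = len(ordered)
    black = ordered[int(clip * n)]                    # ~0.1% quantile -> black point
    white = ordered[min(n - 1, int((1 - clip) * n))]  # ~99.9% quantile -> white point
    span = max(white - black, 1)
    out = []
    for v in pixels:
        scaled = (v - black) * display_max // span
        out.append(max(0, min(display_max, scaled)))  # clamp the clipped pixels
    return out

display = best_fit_display([0, 10, 100, 200, 4095])
```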
This context menu is displayed if you right click on a graphical element or within the bounding
box of a graphical element in your image.
Cut (Ctrl + X, Shift + Del): Cuts the selected graphical element and copies it to the clipboard.
Paste (Ctrl + V, Shift + Ins): Pastes the graphical element(s) from the clipboard into the image.
Bring to Front: Brings the selected graphical element(s) to the front of the image.
Send to Back: Sends the selected graphical element(s) to the back of the image.
Bring Forward: Brings the selected graphical element(s) one layer forward.
Send Backwards: Sends the selected graphical element(s) one layer backward.
Distribute: Only available if you have selected at least three graphical elements. Allows you to distribute the selected graphical elements in vertical or horizontal direction based on their centers, i.e. the elements are realigned so that their centers all have the same distance to each other in horizontal or vertical direction.
Merge: Only available if you have selected at least two elements that overlap. Merges the selected graphical elements to create one element.
Edit Points: Only available for polygonal or spline elements, e.g. Contour or Curve. Allows you to edit individual points of the polygonal or spline element.
Format Graphical Elements: Opens the dialog for formatting the selected graphical element.
In this view you see an overview of your multidimensional images. The individual images of the dataset are presented in a gallery. It is possible to show any combination of dimensions, e.g. channels against time. When you view images for the first time in the Gallery view, they are displayed as follows:
Time lapse image: All the time points present in an image are shown.
Multichannel & time lapse image: All the time points present in an image are shown. All channels are shown as a mixed color image.
Multichannel & Z-stack image: All Z-planes are shown. All channels are shown as a mixed color image.
Time lapse, Z-stack & multichannel image: All Z-planes are shown. All channels are shown as a mixed color image.
See also
2 General View Options [} 1029]
Here you can specify which dimension you want to be displayed on which axis of the Gallery view. To do this, click on the corresponding dimension button. Note that the appearance of this tab depends on whether Show All is activated or deactivated.
Parameter Description
Channels Displays the channels as individual images.
Chann.& Time Displays the channels in relation to the time lapse images.
(Channels and Time
Series)
Z&Time (Z-Stack Displays the z-stack images in relation to the time lapse images.
and Time Series)
Show Dimension Inserts annotations into each individual image that provide informa-
Labels tion on the time point or Z-plane.
Invert X/Y axis This checkbox is only available if the Show All mode is deactivated. It
is active only if two dimensions are shown in relation to each other
(Chann.&Z, Chann.&Time, Z&Time). If activated, this function inverts
the X and Y axis of the view.
Show Graphics Shows graphics/annotations within the images (if graphics/annotations are drawn in).
Show Merged Only visible for multichannel images. Only active if the channels
present are shown. Shows the pseudo colored (mixed) images of all
channels in addition to the individual images.
Parameter Description
X Axis/Y Axis The first column of dropdown lists selects which dimension (depend-
ing on which dimensions are available in the active image) is shown
on the X or Y axis (X axis = horizontal direction, Y axis = vertical direc-
tion).
In the second column of dropdown lists, you can select whether you
want to display all images of each dimension or if you want to display
a certain range of images on the X or Y axis. You have the following
options:
– All Displays all images of the active image in the Gallery view.
– Subset by Step If selected, you can enter a step size in the Step input field. With a step size of 2, only every second image is shown. In the Max. input field, you can enter the desired number of images; the step size is then calculated automatically.
– Subset by If selected, you can adjust a range of images (e.g. from image 4-10)
Range which is displayed in the view. Use the slider or the input fields to en-
ter the desired range.
Create image from Selects the type of image that you want to create.
– Gallery View Creates an image of the current Gallery view. If this option is selected,
the option Gallery Image from is available. Here you can additionally
select a dimension that is not currently displayed (e.g. Single Image
will export each single image additionally). The resulting image is al-
ways a 24 bit RGB color image. The pixel data of the original image
are changed. If the Burn in graphics checkbox is activated, all graph-
ics or annotations will be burned into the output images.
– Selection Subset Creates an image from the images that have been selected in the current view. To select an image, simply click on the image in the Gallery view. Press Ctrl while clicking to select more images at once. Note that for multi-dimensional datasets (e.g. z-stack and time series), creating a new image based on a selection will create an image with the entire range between the individually selected images. The new dataset will contain the smallest continuous subset of the input dataset that contains all selected images. Consequently, the entire range of images between the first and the last selected image along each dimension (z-stack, time series) will be exported. Export of individual scattered images, or export of different subsets at different time or z-positions, is not possible.
– Range Subset Creates an image from the defined selection range. If this entry is se-
lected, sliders for the selected dimensions appear (Start, End and In-
terval). Use the sliders to set the selection range you want.
Create Creates the image and opens it in a new image document. The result-
ing image contains all the information of the input image; the pixel
data are not changed.
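The "smallest continuous subset" rule described for Selection Subset can be illustrated with a short sketch. This is not ZEN code; the function name and the dict-based image coordinates are assumptions made for illustration only:

```python
def continuous_subset(selected, dims=("T", "Z")):
    """Return per-dimension (start, end) ranges covering all selected images.

    `selected` is a list of dicts mapping a dimension name to an index, e.g.
    [{"T": 2, "Z": 5}, {"T": 7, "Z": 1}]. The exported subset spans the full
    range between the first and last selected index along each dimension,
    mirroring the behavior described for Selection Subset.
    """
    ranges = {}
    for dim in dims:
        indices = [s[dim] for s in selected]
        ranges[dim] = (min(indices), max(indices))
    return ranges
```

For the two selections in the docstring, the sketch yields the full range T 2-7 and Z 1-5, which is why scattered individual images cannot be exported on their own.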
Checkbox Function
Show Dimension Anno- Inserts annotations into each individual image that provide infor-
tations in Image mation on the z-plane.
Show Graphics Displays graphics/annotations within the images (in case graph-
ics/annotations are drawn in).
Checkbox Function
Layout Sets the background color of the Gallery view and the distance
between the individual images (from 1-10 pixels).
This view provides an overview of all the scenes from a multiwell plate. This view is only available
if all wells have the same number and size of scenes and regions. Empty wells are allowed.
1 Image Overview
Displays the images of all the scenes from a multiwell plate. A double click on one of the
images opens the respective scene in the standard 2D view. You can select multiple im-
ages by pressing Ctrl while clicking individual images, or you can press the Shift key
to select a range of images. You can open the selected images in a split view for compar-
ison (via the Multiwell Tools tab).
2 View Options
Displays the tabs for the general view options as well as view specific options on the
Multiwell Tools tab.
Parameter Description
Scene in Well Only visible if the document contains multiple images (scenes) per
well.
Activated: Displays only the images of the selected scene.
Deactivated: Displays all images per well side by side. The positions
do not represent the real location.
Well Selection
– Compare Only available if you have selected at least two images in the Multi-
Wells well view.
Opens the selected images in a split view that allows you to compare
them.
– Create Image Only available if you have selected at least two images in the Multi-
well view and have compared them with Compare Wells.
Creates a new image from the selected wells.
– Burn in Well Only available if Display Well Labels Inside Image is activated.
Labels Activated: Burns the well labels into the image when you use Create
Image to create a new image.
– Font Opens a control to select and change the font of the text, including
the size and font type.
– Display Well Activated: Displays the well location as text directly in the image.
Labels Inside
Image
– Position Dropdown Only visible if Display Well Labels Inside Image is activated. Sets the position of the text in the individual images.
Layout Displays settings for the layout of the individual images in the view.
– Background Opens a color selection window to set the background color of the
view.
Parameter Description
Full Screen Switches to full screen mode. To exit full-screen mode, press F11 or
ESC.
Zoom Group Here you have access to the main zoom functions, see also Zoom
Section [} 1030].
Graphics Activated: Displays graphical elements that have been drawn into
the image, e.g. annotations or scale bars. This function is activated by
default.
Copy Display Set- Copies the display settings of the current image.
tings
Paste Display Set- Inserts the copied display settings into the image.
tings
Paste Display Set- Inserts the copied display settings and the channel colors into the im-
tings and Channel age.
Colors
Parameter Description
Export Display Set- Opens a file browser to export the display settings of the current im-
tings age.
Import Display Set- Opens a file browser to import display settings from a file.
tings
Import Display Settings and Channel Colors Opens a file browser to import display settings and channel colors from a file.
Min/Max Adjusts the display characteristic curve so that the darkest pixel is
black and the brightest pixel is white in the display. Note that this
function makes an approximation to get a quick result! For detailed
information about minimum and maximum values refer to measure-
ments conducted in the Histo view.
Best Fit Adjusts the display characteristic curve so that 0.1% of the darkest
pixels contained in the image are black and 0.1% of the brightest pix-
els are white in the display.
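The difference between Min/Max and Best Fit can be modeled with a small sketch (this is not ZEN's implementation; `display_limits` is a hypothetical helper, and Best Fit is approximated here with a 0.1% quantile clip):

```python
import numpy as np

def display_limits(image, mode="minmax", clip_fraction=0.001):
    """Return (black, white) display limits for an image array.

    "minmax" maps the darkest pixel to black and the brightest to white;
    "bestfit" clips 0.1% of the darkest and brightest pixels, analogous to
    the Best Fit behavior described in the manual.
    """
    if mode == "minmax":
        return float(image.min()), float(image.max())
    low = float(np.quantile(image, clip_fraction))
    high = float(np.quantile(image, 1.0 - clip_fraction))
    return low, high
```

Best Fit is more robust against single hot or dead pixels, because a handful of outliers no longer determines the display range.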
In the 2.5D view intensity values in a two-dimensional image are converted into a height map.
Here the highest intensity values are represented by the greatest extension in the Z-direction.
Overall this results in a so-called 2.5D or pseudo-3D image.
Info
If you are viewing a multichannel image, you can have the intensity values of the individual
channels displayed. To do this, activate or deactivate the desired channels on the Dimensions
tab.
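The intensity-to-height mapping behind the 2.5D view, including the Invert Z axis option offered on the 2.5D Display tab, can be sketched as follows (illustrative only; the function name and parameters are assumptions):

```python
import numpy as np

def height_map(image, z_scale=1.0, invert_z=False):
    """Convert 2D intensity values into heights for a pseudo-3D (2.5D) relief.

    The highest intensity becomes the greatest extension in Z; with
    `invert_z` (useful for images with many large, bright regions) the
    mapping is flipped so the lowest intensities rise highest.
    """
    img = image.astype(float)
    if invert_z:
        img = img.max() - img
    return img * z_scale
```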
The tool bars are arranged to the left of and underneath the image area. You can use the tools to
control the display of the 2.5D volumes in the image area.
Zoom Use this to increase the zoom factor of the image area.
Bottom thumb Rotates the 2.5D volume around the horizontal (X) axis.
wheel
Bounding Box Use this to show or hide a bounding box around the 2.5D
volume.
Show X/Y Axis Use this to show or hide the X/Y axis.
Start View Use this to switch back to the start view. A top view of the 2.5D volume is displayed. Lateral movements and the zoom factor are adjusted so that the 2.5D volume can be seen at the center of the image area.
Right thumb Use this to compress the 2.5D volume on the (Z) axis per-
wheel pendicular to the screen plane.
On the 2.5D Display tab you have 4 Render mode options for displaying your 2.5D image.
Parameter Description
Render mode
- Profiles Displays the relief divided into a number of equally spaced profiles.
Set the number of profiles using the Grid distance slider.
- Grid Displays the relief overlaid with a grid. This view supports gray levels
only.
Make the grid more finely or more coarsely meshed using the Grid
distance slider.
Invert Z axis Use this function for images that contain many large, bright regions.
Activated: Displays the lowest intensity values by means of the great-
est extension in the Z direction.
Use palette Activated: Overlays the relief with the pseudo colors that have been
set on the Dimensions tab.
Show plane Activated: Shows two blue, transparent planes in the 2.5D volume.
Set the position of the planes using the X/Y sliders.
Extract image To save an individual image in the current view, click on the Save As
button.
On the Series tab in the 2.5D view you can create a series of images in the 2.5D view. These se-
ries can be played back later as a video clip, for example.
Parameter Description
Render Series Here you can select the desired series mode:
- Turn Around Here you can define the start/stop angle and the rotation direction
X around the X axis.
- Turn Around Z Here you can define the start/stop angle and the rotation direction
around the Z axis.
- Start/Stop Here you can define the angle and zoom settings for the start and
end position of your series. The intermediate positions are interpo-
lated evenly.
- Position List Here you can define any number of positions. The positions can each
have completely different rotation, zoom and illumination settings.
Parameter Description
Stored
Preview
To obtain a preview of the series, click on Play. To end the
preview, click on Stop.
No. of Frames Here you can enter or select the number of individual images in the
series.
Parameter Description
Angle X Enter the rotation angle in the X direction with a precision of 1 de-
gree using the slider or input field.
Angle Y Enter the rotation angle in the Y direction with a precision of 1 degree
using the slider or input field.
Ambient Reduces or increases the intensity of the ambient lighting in the 2.5D
view.
Shine Reduces or increases the effect of the ambient light shining on the re-
lief.
Light height Reduces or increases the intensity of the lighting in the 2.5D view. A
small distance means a circular light source at the center, while a
large distance illuminates the scene evenly.
In the Profile view you can create intensity profiles in your image. The view is divided into 4
quadrants: Profile window (top left), Image window (top right), Profile table (bottom left),
and Interactive Measurements table (bottom right).
Info
To create an intensity profile of a certain region, select a tool on the Profile Definition tab.
Use this to draw a line across a structure of interest in the image. An intensity profile of the
drawn line is generated automatically and displayed in the Profile window. To zoom into an
aspect of the Profile window, drag out a rectangular frame using the left mouse button in the
Profile window. The selected region is displayed in enlarged form. Right-click to return to the
original view.
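The idea behind the intensity profile, reading the gray value of each pixel along the drawn line, can be sketched like this (nearest-neighbor sampling is shown for simplicity; ZEN's actual sampling method is not documented here and the function name is an assumption):

```python
import numpy as np

def line_profile(image, start, end, num=None):
    """Sample gray values along a straight line from `start` to `end` (x, y).

    The number of samples defaults to the line length in pixels; each sample
    is taken from the nearest pixel.
    """
    (x0, y0), (x1, y1) = start, end
    if num is None:
        num = int(np.hypot(x1 - x0, y1 - y0)) + 1
    xs = np.linspace(x0, x1, num).round().astype(int)
    ys = np.linspace(y0, y1, num).round().astype(int)
    return image[ys, xs]
```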
See also
2 General View Options [} 1029]
Tool bar
Using the tools you can add one of four different profile tools (Arrow, Polygon, Freehand, or
Rectangle) to your image. The intensity profile of each tool is shown in the Profile window.
Parameter Description
Select Changes the mouse pointer to Selection mode. You can use this to se-
lect graphic elements in the image.
Clone Use this to copy the last selected element and insert it at another po-
sition in the image.
Arrow Use this to insert an arrow into the image. The gray levels for each
pixel along the line are shown in the Profile window in the direction
of the arrow.
Polygon Use this to insert a polygonal measurement line in the original image.
Freehand Use this to insert a measurement line with a shape of your choice.
Rectangle Use this to insert a rectangular measurement region. For this tool, the
average gray values across the width of the rectangle are shown in
the Profile window and the Profile table.
Stroke Thickness Here you can enter the line width of the measurement line. For line
widths larger than one pixel, the average gray values across the line
width are shown in the Profile window and Profile table.
Parameter Description
Show profile in graphics Activated: Adds the profile curves to the profile tool drawn into the image.
This function is only available for the Arrow tool.
Parameter Description
Normal Switches the mouse behavior for interacting with the Profile win-
dow to the standard behavior.
Reset Table Deletes all values from the interactive measurement data table.
Add to table Adds the current measurement values of tools used in the Profile win-
dow to the interactive measurements table.
Parameter Description
Grid Distance Adjusts the value with the slider or input field. It defines how the
measurement is done:
For a grid size of 1, each pixel covered by the profile tool is measured.
For a grid size of 2, the average of 2 pixels is measured; for a grid
size of 3, the average of 3 pixels, and so on. This is useful when
measuring line profiles in very large tile images, e.g. from Axioscan.
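The grid-based averaging can be sketched as follows (an illustrative model, not ZEN code; a trailing partial group is averaged as well):

```python
import numpy as np

def average_by_grid(profile, grid=1):
    """Average every `grid` consecutive samples of an intensity profile.

    For grid=1 every pixel is kept; for grid=2 the mean of 2 pixels is
    reported, and so on. This reduces the number of profile points for
    very long lines, e.g. across large tile images.
    """
    profile = np.asarray(profile, dtype=float)
    return np.array([profile[i:i + grid].mean()
                     for i in range(0, len(profile), grid)])
```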
Here you can configure the display for the Profile view.
Show section
Parameter Description
Profile Table Activated: Shows the profile table.
Channel section
Here you can activate or deactivate the profiles for each color channel for an RGB image. Gray:
shows the average intensity value for all three color channels. This function is only available for
RGB color images.
Parameter Description
Auto Sets the limits for the axes automatically based on the available pixel
values in the image.
Norm Normalizes the profile display to the maximum values of the distribu-
tion.
Fixed Enter the min/max values for the profile display in the Min/Max input
fields manually.
The Histo (Histogram) view shows you the gray value histogram of your image. In the right image
area you can see your current image and in the left image area you can see the Histogram win-
dow. At the side you will also find four data tables:
§ In the first table from the left you will find all the raw data for each channel.
§ In the second table from the left you will find all the limits for each channel of the image next
to the image name.
§ In the third table from the left you will find the statistical values for the gray value distribu-
tion, e.g. average, standard deviation, minimum and maximum value.
§ The fourth table shows the values of measurements in the histogram. The results (Integral)
show the percentage fractions of the occurrences.
See also
2 General View Options [} 1029]
Tools
With these tools you can add specific ranges to your image. The histogram window displays the
gray value histogram for each area.
Parameter Description
Select Changes the mouse pointer to Selection mode. You can use this to select graphic elements in the image.
Clone Use this to copy the last selected element and insert it at another po-
sition in the image.
Add to Table Only active if a measurement (using CaliperX mode) was drawn into
the histogram.
Adds the current measurement into a measurement data table below
the original image.
Reset Table Deletes the measurement data table below the original image.
Parameter Description
Histo Table Select the type of gray value distribution from the dropdown list. The
following types are available:
- SumUp
- SumDown
Relative Frequency
Parameter Description
Bin count Enter the Bin count using the slider.
Lower Threshold Enter the lower threshold value for the gray value distribution using
the slider or spin box/input field. All regions in the image with gray
values below the lower threshold value are overlaid in blue and all
those with gray values above the upper threshold value are overlaid in
red.
Skip Black Activated: Automatically subtracts the lowest value of the gray distri-
bution. If activated, the settings for the lower threshold value are de-
activated.
Upper Threshold Enter the upper threshold value for the gray value distribution using
the slider or spin box/input field. All regions in the image with gray
values below the lower threshold value are overlaid in blue and all
those with gray values above the upper threshold value are overlaid in
red.
Skip White Activated: Automatically subtracts the highest value of the gray dis-
tribution. The settings for the upper threshold value are deactivated.
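The blue/red threshold overlay described above can be modeled with a short sketch (illustrative only; `threshold_overlay` is a hypothetical helper, not part of ZEN):

```python
import numpy as np

def threshold_overlay(image, lower, upper):
    """Return an RGB overlay marking out-of-range pixels of a gray image.

    Pixels with gray values below `lower` are colored blue, pixels above
    `upper` are colored red; all other pixels keep their gray value in
    every channel.
    """
    img = np.asarray(image)
    rgb = np.stack([img, img, img], axis=-1).astype(np.uint8)
    rgb[img < lower] = (0, 0, 255)   # blue overlay for too-dark pixels
    rgb[img > upper] = (255, 0, 0)   # red overlay for too-bright pixels
    return rgb
```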
Here you can configure the display for the Histo view.
Show section
Parameter Description
Statistic table Activated: Shows the table containing the statistical values in the im-
age area.
Int. Measurement Activated: Shows the measurement data table below the original im-
age.
Channel section
Here you can activate or deactivate the histograms for each channel.
Parameter Description
Auto Sets the limits for the axes automatically.
Norm Normalizes the histogram display to the maximum values of the distri-
bution.
Fixed Enter the min/max values for the histogram display in the Min/Max
input fields.
Logarithmic
Parameter Description
Channel Trans-
parency
In this view measured values from images are displayed in a table. The table is only visible if there
are annotations/measured values in the image. To highlight the row of the table containing the
measured values of a graphic element, click on a graphic element in the image. To highlight a
graphic element in the image, click on the measured value in the row of the table.
Here you can specify how to draw the graphic elements for measurements into an image and
how the measurement data are displayed. You can also add user-specific features to individual
graphic elements.
Parameter Description
Channel Activated: Activates the Single Channel mode. Only draws graphic el-
ements into the channel currently displayed.
Time Activated: Only draws graphic elements into the time point currently
displayed.
Z-Position Activated: Only draws graphic elements into the Z-position currently
displayed.
Copy in All Follow- Activated: Draws a new graphic element into the view currently dis-
ing played and into all subsequent time points or Z-positions.
Parameter Description
Name Here you can enter a name for the feature.
Value Here you can enter the desired value for the current graphic element.
Unit Here you can enter the desired unit for the feature.
Add Adds the feature. The measurement data table is expanded to include
this feature.
Parameter Description
Format
- Table Displays the measured values in a row of a table. As you can specify
the features individually for each graphic element, the number of col-
umns containing measured values may differ from graphic element to
graphic element (i.e. from row to row).
Current View Only displays the measured values of the current view.
Create Document Creates a measurement data table from the measured values dis-
played. The table is saved as a separate document.
Parameter Description
Export Tempera- Only visible for cryo images with temperature data.
ture Data
Opens the temperature data as a table in ZEN.
The Info View allows you to display extensive information about your image. Using the
buttons in each of the sections you can show additional fields in the sections or hide fields that
are currently showing. To show or hide individual sections, click on the button to the left of
the headings for each of the sections.
Info
The Info View only shows the fields that actually contain data. Using the buttons in
each of the sections you can show additional fields. To do this, activate the corresponding
checkboxes in the context menu.
Parameter Description
Title Here you can enter a title for your image.
Rating Here you can enter a rating for your image. To enter a rating, click on
the star icons.
Parameter Description
Name Displays the file name of the image without file extension.
File Path Displays the location where the image is saved in your file system.
User Displays the name of the user. You can enter the user name under
Tools > Options > User [} 844].
Parameter Description
Compression Qual- Displays the compression quality.
ity
Parameter Description
Time Series Displays how many time points the image contains. The value in
brackets shows the full duration of acquisition.
Z-Stack Displays how many z-planes the image contains. The value in brackets
shows the full size of the z-stack.
Tiles Displays how many individual images (tiles) the image is composed of.
Image Size (Pixels) Displays the image size in pixels. The first number indicates the hori-
zontal dimension and the second the vertical dimension.
Frame Store Reso- Displays the frame store resolution, the pixel size of a single tile for
lution SEM images.
Image Size (Scaled) Displays the scaled image size. The first number indicates the horizon-
tal dimension and the second the vertical dimension.
Bit Depth Displays the bit depth of the active image, e.g. 24 Bit. The bit depth
depends on the camera settings when acquiring the image.
Stage Position Displays the stage position. Within the image this is the center point.
In the case of tile images this is the center point of the first tile.
Scanning Mode Displays the scanning mode. This can either be the image field, an im-
age line or a pixel.
Scanner Zoom Displays the zoom factor. The value 1 corresponds to the standardized
image field of all confocal systems.
Rotation Displays the rotation of the image field around the optical axis.
Crop Offset Displays the shift of the scanned region from the center of the image.
Pixel Time Displays for how long the emission signal is collected per pixel. This is
the so-called integration time.
Line Time Displays how long the system needs to scan an image line.
Frame Time Displays how long the system needs to scan the image field displayed
in X and Y in full.
Parameter Description
Image Center Posi- Displays the stage position of the most central pixel in the image for
tion an x/y stage.
ROI Center Offset Indicates the offset of the scan field relative to the maximum scan
field.
The Edit Scaling dialog is organized as a table. The columns contain the Scaling Factor
and Scaling Unit, and the rows contain the dimensions.
Parameter Description
Scale Factor Sets the desired scaling factor in the input fields.
Scale Unit column Selects the desired scaling unit from the dropdown list. The met-
ric units Meter, Centimeter, Millimeter, Micrometer,
Nanometer and Picometers are available as options, as well as
the imperial units Inch and Mil.
Row Z Shows the scaling in the 3rd dimension. This is usually the focus
direction.
Info
Row Z for the third dimension is only displayed if the image has a third dimension.
Parameter Description
Acquisition Start Displays the date and time when the acquisition of the image
took place.
Zoom
Filters
Beam Splitter Displays which beam splitter was used to acquire the channel.
Lasers
Parameter Description
Aperture size
Magnification
Accelerating Voltage
Working Distance
Scan Speed
Probe Current
Specimen Current
Brightness
Contrast
Noise reduction
Tracks
Reflector Displays which reflector cube was used to acquire the image.
Beam Splitter
Contrast Method Displays the contrast technique. In transmitted light this is the
condenser setting, while in reflected light it corresponds to the
selected reflector cube.
Light Source
Light Source Intensity Displays the lamp intensity with which the image was acquired.
Illumination Wavelength
Laser Wavelength
Laser Blanking Blanking of the laser during scanner movement without acquisi-
tion.
Stokes Vector
Scan Mode
Rotation
Crop offset
Pixel Time
Line Time
Frame Time
Scan Direction
Line Step
Averaging
Averaging Mode
Averaging Method
VivaTome Grid
VivaTome Reflector
Channels
Channel Description Here you can enter a description of the channel. Describe the ex-
act use of the channel or what can be seen in this channel.
Emission Wavelength Displays the main emission wavelength of the channel or dye
used.
Excitation Wavelength Displays the main excitation wavelength of the channel or dye
used.
Effective NA
Detection Wavelength
Imaging Device
Camera Adapter Displays which camera adapter was used to acquire the image.
EM Gain Displays the factor by which the camera signal was increased.
Exposure Time Displays the exposure time with which the image was acquired.
Depth of Focus Displays the depth of focus. This is calculated according to the
following formula: Depth of focus = (2 * n * λ) / (NA)² = (2 * refractive index * emission wavelength) / (numerical aperture)²
Binning Mode Displays whether binning was applied during acquisition and
how much.
Detector Gain Displays the gain setting of the detector for acquisition.
Detector Digital Gain Displays the digital gain of the detector during acquisition.
Detector Offset Displays the offset settings of the detector during acquisition.
HDR Processing
Airyscan Mode
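The depth of focus formula from the table above can be evaluated directly. As a worked example (the sample values are assumptions, chosen only to illustrate the arithmetic): with n = 1.0, λ = 0.52 µm and NA = 0.5, the result is (2 · 1.0 · 0.52) / 0.5² = 4.16 µm.

```python
def depth_of_focus(refractive_index, emission_wavelength_um, numerical_aperture):
    """Depth of focus per the manual's formula: (2 * n * λ) / NA².

    Wavelength in micrometers gives a result in micrometers.
    """
    return (2 * refractive_index * emission_wavelength_um) / numerical_aperture ** 2
```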
Info
In the case of multichannel images the channel-dependent information is saved in a table.
Here the sorting of the individual information fields may differ.
This section is only visible if your image was processed with Direct Processing.
Parameter Description
Processing Func- Displays the name of the function which was used for Direct Process-
tion ing.
Completed Displays the date and time when the processing was finished.
This section is only visible if the image was processed with Deconvolution. The displayed param-
eters depend on which algorithm was used.
Parameter Description
Algorithm Displays which algorithm was used for processing.
Aberration Correc- Displays what/if aberration correction was used for processing.
tion
GPU Acceleration Displays what/if GPU acceleration was used for processing.
Quality Threshold Displays the quality threshold which was used for processing.
Normalization Displays the normalization method which was used for processing.
Strength Displays the normalization strength which was used for processing.
Stack Correction Displays what/if stack correction was used for processing.
Likelihood Displays the likelihood calculation which was used for processing.
Parameter Description
-> Clipboard Saves the parameters to the clipboard. For reuse of parameters for
new processing, see also Step 4: Info View and Re-Using Deconvo-
lution Parameters from a Processed Image [} 600].
If your image document contains a label and preview image, the images are displayed in this sec-
tion.
Parameter Description
Copy Label Image Creates and opens a new image document containing the label im-
age.
Copy Preview Im- Creates and opens a new image document containing the preview im-
age age.
The Tree View is visible only if you have activated the Enable Tree view checkbox under Tools
> Options > Documents. The checkbox is deactivated by default.
The tree view shows a detailed list containing all meta data of the selected image.
These image views are only visible if the image has corresponding features. The Split view, for ex-
ample, is only visible for multichannel images.
In this view you can display images that are acquired in Lambda mode, see Lambda Mode. The
resulting images are called Lambda stacks. For that type of image the 2D View is not available.
Instead the Lambda View displays a Lambda Stack in a wavelength-coded color view as default. A
color palette, mimicking the emission wavelength of the channel, is automatically assigned to the
individual lambda images which are then displayed in a merge-type display.
On Display tab, the channel-specific settings of brightness, contrast and gamma can be handled
as described for channels in the 2D View.
In order to use other views (e.g. Split or Gallery view) or to view lambda stack data sets in ZEN
black, convert the data set using the Convert to Lambda image processing function.
The general view options on Dimension tab are adapted to the Lambda View with the following
changes:
Parameter Description
Channels Displays the single channels of a Lambda stack image as a colored
button. You can handle the channels like in the 2D View. E.g. if you
click on a channel button you can show or hide the channel in the im-
age area.
Lambda Coded Activated: All channels are displayed as a merged image. Each chan-
nel is assigned to a channel color that represents the recorded emis-
sion wavelength in the lambda stack.
Deactivated: Only one channel of the Lambda Stack is displayed
without pseudo coloring. Additionally the Single Channel checkbox
is activated and cannot be changed. To display a different channel of
the Lambda Stack, click on the according channel. This will display the
chosen channel and deactivate the previously displayed channel.
See also
2 General View Options [} 1029]
Airyscan multiplex images can be directly viewed during acquisition in 3D ortho view.
Select the Ortho view tab to switch to this display mode.
In Ortho Display, you can adjust the areas which are sectioned for the 3D view.
Ortho Display also allows you to display a Maximum Intensity Projection (MIP) in all 3 directions.
Activate the Maximum Intensity Projection (MIP) checkbox to activate this view. It is recom-
mended that only fully acquired stacks are viewed with this function.
The result is a brighter XY image, and a very bright XZ and YZ view, because many slices are
projected here. The XZ and YZ views may also show stripes. This is due to the preview
processing and often disappears once the full data stack has been processed and saved.
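A maximum intensity projection simply keeps the brightest value along one axis of the stack. As a sketch (illustrative, not ZEN code):

```python
import numpy as np

def max_intensity_projection(stack, axis=0):
    """Maximum intensity projection of a Z-stack along the given axis.

    With a stack shaped (Z, Y, X), axis=0 projects along Z to give the XY
    view; projecting along the other axes gives the XZ and YZ views.
    """
    return np.asarray(stack).max(axis=axis)
```

Because every slice contributes its brightest value, the projected image is brighter than any single plane, which explains the very bright XZ and YZ views noted above.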
Parameter Description
Cut Lines Sets the positions (pixel values) for the section lines using the X/Y/Z
sliders or input fields.
Alternatively you can also adjust the positions directly in the image
area. To adjust the positions, move the mouse over a section line in
the image. Hold down the left mouse button and move the mouse.
Parameter Description
Cut Line Opacity Only visible if the Show All mode is activated.
Here you can enter the degree of opacity of the section lines from 0%
(invisible) to 100% (completely opaque).
Maximum Inten- Activated: Displays a maximum intensity projection (MIP) across all
sity Projection planes for all 3 views. The section lines are hidden and the control ele-
(MIP) ments that are not relevant in this view are deactivated.
New Image Creates a new image document. Select the desired view from the
dropdown list (only in Show All mode). To save the image, click on
the Create button.
The resulting image contains the image data in the same dynamics
(bit depth) as the original image and consists of the same number of
channels (in the case of multichannel images) or time points (in the
case of time lapse images) as the original image, but only contains the
Z-plane currently displayed.
This view displays the individual polarization channels of your image as a multi image view.
1 Image View
Groups the acquired channels based on the contrast method and displays them in separate image containers. Every polarization channel (xPol, pPol, cPol) is displayed in its own container, as are brightfield and fluorescence channels (e.g. if there are several brightfield channels, they are all displayed in one brightfield container). In the Pol View Display Tab [} 1021] you can toggle the display of the individual contrast methods.
2 View Options
Here you have the general view option of the Dimensions Tab [} 1029] and Display Tab
[} 1043] which both control the settings of the currently selected image container. Addi-
tionally, you have the view specific controls on the Pol View Display Tab [} 1021].
Parameter Description
Channel buttons Toggles the visibility of the individual channels of the image.
Synchronize Rotation Activated: Synchronizes the fading and the rotation for all channels.
Only visible for multichannel images and not during the acquisition of LSM images.
In this view you see all channels of a multichannel image. The channels are displayed side by side,
in the channel colors that have been assigned to them. You also see the mixed image view in
which all the channels are overlaid.
Info
By double-clicking on an acquired multi-channel image, you can switch quickly to the 2D view.
Double-clicking on the image in the 2D view switches you back to the Split view. If you dou-
ble-click on one of the displayed channels, only this channel will be shown in 2D View.
See also
2 2D View [} 990]
2 General View Options [} 1029]
This view is only visible for multi-channel fluorescence images. It is used to:
§ display the spectra corresponding to defined ROIs (mean ROI intensity over the wavelength),
§ show the intensity values in table form, copy the table to clipboard or save the table as a text
file and
§ generate linear unmixed multi-channel images.
In this view you see two areas as default. The intensity-over-wavelength diagram (1) to the left
and the image display (2) to the right. Below the image area you have the specific view options
(3).
Here you can select various tools and use these to draw graphic elements into your images, simi-
lar to the tool bar on the Graphics tab. You can also obtain an overview of the graphic elements
that you are using in your image.
The following list describes the specific parameters for this tab:
Parameter Description
Toolbar Using the tools you can draw in certain regions of interest which are
then displayed in the intensity-over-lambda diagram and will be used
for linear unmixing.
Select Use this to select the graphic elements in the image area. If you are
currently in another mode, you can switch back to the selection mode
using this button.
Clone Use this to create an identical copy of the last graphic element drawn
in by simply clicking anywhere in the image area. To exit this mode,
either switch back to the selection mode or press the ESC key.
Draw Circle Use this to draw in a circle. This element by default also shows the
mean gray level of the image region.
Draw Spline Contour Use this to draw in a freely selectable contour. You can either define
the corner points by a series of clicks or you can trace a contour by
keeping the left mouse button pressed. Close this contour by right-clicking.
Corners are always rounded with this tool. This element by default
also shows the mean gray level of the image region.
Draw Cross With this tool you can draw a cross marker into the image, where the
pixel at the center of the cross is taken for the unmixing measurement.
Note: The 10x10 pixel rectangle around the cross is only a visual aid
to locate the marker, the unmixing is only taking the one pixel at the
center into account and not this rectangular bounding box.
List of Spectral Data The list gives you an overview of the spectral data in the image, which
will be used for linear unmixing. The names indicate the origin, e.g.
whether picked manually or automatically by ACE (see below) or loaded from
the spectra database.
To load a spectrum from the spectral database, press the Add + button
for a new row. Click into the Name column and select the name of
the required spectrum.
Save to spectra database If you click on this button you can save the selected entry to the spectra database.
Linear Unmixing Performs the linear unmixing processing of the image with the se-
lected spectra. Note: The channels of the multichannel image which
are de-selected in the Dimensions tab are not included in the calcula-
tion.
Widefield Crosstalk Removal Only visible if the number of spectra is equal to the number of channels
in the image. This option allows unmixing to be performed on
multichannel fluorescence images created without a spectral detector.
Typically this would be multichannel images acquired with a filter-based
multichannel microscope system. In this case, the function will
automatically create the same number and type of channels present in
the input image for the output image of the Unmixing function.
Activated: Removes the crosstalk of widefield channels and ensures
that the channel information (emission and excitation wavelength)
and metadata of the input image are copied to the output image.
Note that the function tries to automatically identify which channel
matches which spectrum. If this fails, the function assumes that the
order of spectra matches the sequence of the channels.
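Conceptually, linear unmixing treats the measured channel intensities of each pixel as a weighted sum of the reference spectra and solves for the weights, which become the unmixed channels. A minimal sketch for the exactly determined two-dye, two-channel case (illustrative only; the spectra and dye names below are hypothetical, and the software's actual algorithm is a general least-squares fit over all channels):

```python
def unmix_pixel(intensities, spectrum_a, spectrum_b):
    """Solve the 2x2 linear system  S @ [ca, cb] = intensities  for the
    contributions ca, cb of two reference spectra (Cramer's rule)."""
    a1, a2 = spectrum_a          # per-channel response of dye A
    b1, b2 = spectrum_b          # per-channel response of dye B
    i1, i2 = intensities
    det = a1 * b2 - b1 * a2
    ca = (i1 * b2 - b1 * i2) / det
    cb = (a1 * i2 - i1 * a2) / det
    return ca, cb

# Hypothetical reference spectra:
spec_gfp = (0.8, 0.2)   # emits mostly into channel 1
spec_rfp = (0.1, 0.9)   # emits mostly into channel 2
# A pixel containing 10 units of GFP and 5 units of RFP measures:
measured = (0.8 * 10 + 0.1 * 5, 0.2 * 10 + 0.9 * 5)  # (8.5, 6.5)
ca, cb = unmix_pixel(measured, spec_gfp, spec_rfp)   # approx. (10.0, 5.0)
```

With more channels than spectra (the usual lambda-stack case), the same model is solved as an overdetermined least-squares problem per pixel.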
Show Table Activated: Displays a table of intensity values over wavelength below
the default image area.
Subtract background Here you can select the list entry of a marked spectrum that should be
subtracted before linear unmixing.
Channel
- Raw The raw data acquired during a lambda stack is used as channels and
for spectral display.
- Spectral The intensity data of the lambda stack is calculated into channels for
each detector (Channel 1 and Channel 2).
This tab provides several parameters to change the appearance and contents of the spectral
graph. To begin with, we recommend using the default settings here.
Parameter Description
Line Interpolation Selects how the line in the diagram is interpolated. You can select
Line or Bezier.
Area Appearance Selects the pattern type of the area below the line. The slider on the
right sets the opacity for the area.
Show Marker Activated: Displays the markers of the inflexion points of the spectral
graph.
– Fixed Enables you to select Form and Size of the markers manually.
Show Series Values Activated: Displays the intensity values at the inflexion points of the
spectral graph. You can also select how the number should be displayed:
§ Above Horizontal
§ Above Vertical
§ Inside Vertical
§ Stacked
– Auto Automatically selects the number of decimal places for the series values
displayed in the diagram.
Y-Axis Selects the scaling of the y-axis. You can select Auto, Norm, or Fixed
scaling.
This view becomes available after Localization Microscopy processing of a 2D- or 3D-SMLM
dataset or during online processing of a 2D-SMLM or 3D-SMLM acquisition. The displayed image
is a vector map of plotted localizations (centroids) that are by default Gauss rendered.
See also
2 Lattice SIM Family [} 1302]
2 General View Options [} 1029]
2 Center Screen Area [} 26]
This tab enables you to view statistical parameters associated with each peak or object.
Parameter Description
Index Assigns a number to each peak/object.
First Frame Displays the number of the frame in which the peak/object appears
for the first time.
Frames Missing Displays the number of frames a peak/object was missing. For un-
grouped data this will always be 0.
Z Position [nm] Displays the position in nm of a peak/object in the Z direction with the
origin at the lowest z-plane (for 3D SMLM data sets only).
Precision z [nm] Displays the calculated localization precision of a peak/object in nm in
z.
Chi Square Displays the χ² value of the Gauss fit for the localization precision.
PSF Width [nm] Displays the width (1/2 FWHM) of the point spread function (PSF) cal-
culated from the Gauss fit in x,y (displayed for 2D data sets only).
Z Slice Indicates the slice of a multi-stack experiment (3D SMLM data sets
only).
This tab enables you to view statistical parameters associated with each peak or object.
Parameter Description
Show Molecule Table Activated: Displays the localized molecules in a table in the image
area, which lists each peak (for ungrouped data) or object (for
grouped data).
Show Statistic Plot Activated: Displays a histogram or scatter plot of the parameter se-
lected as Histogram Source.
Histogram Source Selects the parameter that should be plotted in the histogram. All pa-
rameters listed in the Molecule Table are available. The selected pa-
rameter is plotted versus its frequency.
Histogram Range Sets the quantile level for the data. The histogram is rescaled so
that only those x-values are displayed that accommodate the entered percentage
of the data, starting from 0. Hence outliers with high values can
be removed from the display.
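The quantile rescaling described for Histogram Range can be sketched as follows (a simplified nearest-rank quantile; the software's exact method may differ):

```python
import math

def histogram_range(values, quantile_percent):
    """Return the upper display limit so that the given percentage of
    values (counted from 0 upward) falls inside the displayed range.
    Nearest-rank quantile; values above the limit fall off the plot."""
    ordered = sorted(values)
    rank = max(1, math.ceil(quantile_percent / 100 * len(ordered)))
    return ordered[rank - 1]

# With one extreme outlier, a 90% quantile excludes it from the display:
data = [10, 12, 11, 13, 10, 12, 11, 13, 12, 500]
histogram_range(data, 90)   # -> 13 (outlier 500 removed from display)
histogram_range(data, 100)  # -> 500 (full range)
```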
This tab enables you to filter parameters regarding their values. Activate the respective checkbox
to activate the filter.
Parameter Description
Localization Precision [nm] Enables you to filter for the calculated localization precision of a peak/
object in nm in X, Y (min = 1, max = 500).
Number of Photons Enables you to filter for the total number of photons of a peak/object
(min = 1, max = 10000).
First Frame Enables you to filter for the number of the frame in which the peak/
object appears for the first time (min = 1, max = number of frames of
the time series).
Background Enables you to filter for the variance of the number of background
photons (min = 1, max = 5000).
Chi Square Enables you to filter for the χ² value of the Gauss fit for the localization
precision (min = 0, max = 100).
Parameter Description
Pixel Resolution XY [nm/pixel] Sets the size of the pixel in x, y in nm of the SMLM image.
Z-Slice Thickness [nm/pixel] Sets the size of the voxel in z in nm of the SMLM image.
Display Mode Selects the mode how the SMLM channel is displayed.
– Gaussian Renderer Localized peaks/objects are displayed with the localization error and
rendered to a Gauss function. The higher the precision, the smaller
the width and the brighter the peak/object. Note that areas with
higher molecule densities are also brighter.
– Molecule Density Number of peaks/objects per area (in μm²). Associated with the Molecular
Density map is a scale bar that shows the color code of molecule
density in molecules/μm² according to the look-up table (LUT) assigned
to the SMLM channel.
Expansion Factor [xPSF] The set number is multiplied with the PSF (localization precision
error), so the rendered peaks/objects get smaller for numbers < 1 and
bigger for numbers > 1 in X, Y and Z. A warning is displayed if the
slider value drops below 0.71, as the data would then be rendered more
accurately than common fitting routines allow.
Render Automatic Range SR [%] Activated: The set value indicates the quantile up to which percentage
of the data (starting with the lower values) will be displayed. The input
number therefore sets the threshold for the maximum range, so the
super-resolution (SR) image will be fully scaled from 0% to the input number
%. Higher values outside the input range are displayed as
saturated pixels.
Extract to Image Converts the vector map of Gauss-rendered centroids to a bona fide
“.czi” image using pixel/voxel sizes as set in Pixel Resolution XY and
Pixel Resolution Z.
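Gaussian rendering draws each localization as a Gaussian whose width is the localization precision multiplied by the expansion factor, so precise localizations appear as small, bright spots. A 1D sketch (illustrative only; the software renders in 2D/3D):

```python
import math

def render_peak(center, precision, expansion, grid):
    """Render one localization as a normalized Gaussian on a 1D grid.
    sigma = precision * expansion: expansion < 1 shrinks the rendered
    spot, expansion > 1 enlarges it (cf. the Expansion Factor slider)."""
    sigma = precision * expansion
    norm = sigma * math.sqrt(2 * math.pi)
    return [math.exp(-((x - center) ** 2) / (2 * sigma ** 2)) / norm
            for x in grid]

grid = [i * 0.5 for i in range(41)]   # 0 .. 20 nm in 0.5 nm steps
narrow = render_peak(10.0, precision=2.0, expansion=0.8, grid=grid)
wide = render_peak(10.0, precision=2.0, expansion=1.5, grid=grid)
# The more tightly rendered peak is narrower and brighter at its center:
max(narrow) > max(wide)  # -> True
```

Because each Gaussian is normalized, areas where many molecules overlap accumulate intensity, which is why denser regions appear brighter.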
This tab enables you to perform a model based drift correction on the SR image.
Parameter Description
Apply Drift Correction in X/Y Only available if Calculate Correction has been executed.
Activated: Applies the drift correction in X and Y to the image.
Apply Drift Correction in Z Only available if Calculate Correction has been executed.
Activated: Applies the drift correction in Z to the image.
Show Drift Diagram Only available if Calculate Correction has been executed.
Activated: The drift in x, y and z direction versus the frame number
(time) is displayed as separate graphs in the drift diagram.
Calculate Correction The software identifies a linear drift on sample structures by means of
redundant correlation and uses them to automatically correct for this
drift. The correction vector calculated from the average of all structures
is then applied to the SMLM image (SR channel).
Segments Sets the number of segments used in the model based drift correc-
tion. The number of frames of the SMLM time series within each seg-
ment is determined by the total number of images divided by the
number of segments rounded up or down to obtain integer numbers.
Hence the last segment might contain fewer, the same number of, or more frames
than the preceding segments. The drift correction is performed for
each segment separately and then the drift between the segments is
corrected.
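The segment sizing described above (total frames divided by the number of segments, rounded to an integer, with the remainder absorbed by the last segment) can be sketched as:

```python
def segment_frames(total_frames, segments):
    """Split a time series into segments of (roughly) equal length.
    The last segment absorbs the rounding remainder, so it may contain
    fewer, equally many, or more frames than the preceding ones."""
    size = round(total_frames / segments)
    sizes = [size] * (segments - 1)
    sizes.append(total_frames - size * (segments - 1))
    return sizes

segment_frames(1000, 8)  # -> [125, 125, 125, 125, 125, 125, 125, 125]
segment_frames(1000, 7)  # -> [143, 143, 143, 143, 143, 143, 142]
```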
This tab enables you to assign and group identified peaks to one object (molecule).
Parameter Description
Max On Time Sets the maximum number of consecutive frames in which peaks within
the defined capture radius may be detected and still be considered to belong
to one molecule. If peaks at the same location are found in more consecutive
frames, all of these peaks are discarded.
Off Gap Sets the maximum number of consecutive frames in which a peak within
the defined capture radius may be missing while the peaks are still regarded
as belonging to the same molecule. If the defined number is exceeded,
the peaks are sorted into two molecules and not grouped.
Capture Radius Sets the maximum radius in pixels (as defined by the set rendered lo-
calization precision in x,y and z) within which peaks of consecutive
frames must lie in order to be regarded to belong to the same mole-
cule.
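Taken together, the three grouping parameters decide whether a sequence of localizations is combined into one molecule. A simplified 2D sketch of these rules (illustrative only, not the software's implementation):

```python
def group_peaks(peaks, capture_radius, max_on_time, off_gap):
    """Decide whether a frame-sorted list of (frame, x, y) localizations
    belongs to one molecule, applying the Capture Radius, Off Gap and
    Max On Time rules described above (simplified 2D sketch)."""
    def close(p, q):
        return ((p[1] - q[1]) ** 2 + (p[2] - q[2]) ** 2) ** 0.5 <= capture_radius

    for prev, cur in zip(peaks, peaks[1:]):
        gap = cur[0] - prev[0] - 1           # frames where the peak was missing
        if gap > off_gap or not close(prev, cur):
            return False                      # sorted into separate molecules
    on_time = peaks[-1][0] - peaks[0][0] + 1  # total span of frames
    return on_time <= max_on_time             # too long: not one blinking event

# Peaks in frames 3-6 within 1 px of each other, with one 1-frame gap:
peaks = [(3, 10.0, 10.0), (4, 10.3, 9.8), (6, 10.1, 10.2)]
group_peaks(peaks, capture_radius=1.0, max_on_time=5, off_gap=1)  # -> True
group_peaks(peaks, capture_radius=1.0, max_on_time=3, off_gap=1)  # -> False
```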
Here you configure the settings for how the image is displayed on the screen. You can select the
size of the display and call up information about the content of the image. In the case of multidi-
mensional images you can select here which dimension is displayed. The dimension sliders (e.g.
time, channels) help you to navigate through the single images of an experiment.
Note: The displayed settings in the tab depend on the image and experiment. Also, the settings of
this tab in the 2D and 3D Correlative Workspace viewer are limited and/or different compared to
the normal ZEN image view.
Depending on how many dimensions your image contains, several sliders are available in this section.
With the sliders you can adjust the currently displayed position for each dimension of the image.
The currently displayed position can be seen and also set in the input field next to the slider.
The Play buttons to the right of the input fields enable you to play back the dimension au-
tomatically. This takes place at a rate of 5 images per second by default. You can change the
speed on the Player Tab [} 1034].
Info
For images with more than 3 dimensions, a scrollbar is displayed which you can use to access
the other sliders.
Parameter Description
Follow Acquisition By activating the Follow Acquisition checkbox during acquisition of
multi-dimensional experiments (time series, tiles and z-stacks), the currently
acquired image is displayed (default value).
By moving the Time slider in the Dimensions tab, this feature is dis-
abled and a button for performing a manual update of the slider
range is shown.
Phase Only visible for image acquisition with phases in Structured Illumina-
tion Microscopy (SIM).
Adjusts the desired phase position.
Global-Z Only visible for ZEN Connect projects with at least one z-stack.
Sets the global value for the displayed z-slices. The range of the slider
is defined by the z values of all stacks in the project.
Note: When you use the Global Z slider and are beyond the range of
a certain image, only a frame for this image is displayed to show
where the image is positioned.
Block Only visible if you have used the Experiment Designer and created
several experiment blocks.
Adjusts the desired experiment block.
Total Time Only visible if you have used the Experiment Designer.
Adjusts the duration across all blocks.
Phase Only visible for ApoTome images and if on the Apotome Tab [} 1046]
the Display Mode is set to Raw Data.
Selects the various phases of the raw images.
This section contains tools to adjust the size of the displayed image region.
Parameter Description
Fit to View Automatically sets a zoom factor at which the entire image can be
displayed visibly on the screen. Alternatively, you can use the shortcut
Ctrl + 0.
Decrease Zoom Decreases the zoom factor. Alternatively, you can use the shortcut
Ctrl + F8.
Increase Zoom Increases the zoom factor. Alternatively, you can use the shortcut
Ctrl + F7.
Zoom factor Here you can set the display size steplessly. The desired zoom factor
can be entered in the input field in percent.
Selection Activates the selection mode. Use this to e.g. select a graphic element
in the image.
Zoom Rectangle Activates the zoom mode. Hold down the left mouse button and drag
out a selection rectangle. When you release the left mouse button,
the region within the rectangle is displayed in enlarged form.
If you have a mouse with a mouse wheel, you can also use this to
enlarge/reduce image regions.
Show Pixel Values Activates the show values mode. If you move the mouse pointer into
the image region, a vertical arrow and a display field will appear. The
pixel values of the position to which the arrow is pointing are displayed
in the display field.
In the first line of the display field the X/Y coordinates are displayed.
The second line displays the X/Y coordinates in scaled units. In the
other lines the gray values for each channel are shown.
Navigator Opens the Navigator window in the image area. There you will see
an overview of your image and you can navigate to different positions
using a rectangular window.
Interpolation Activated: The pixel elements of the image are shown in an interpo-
lated display. This makes it possible to avoid the pixelated display of
small or greatly enlarged images.
Deactivated: The pixel elements of the image are displayed as they
are. The interpolation function is activated by default.
You can deactivate this function in Tools > Options > Documents.
See also
2 Documents Tab [} 835]
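An interpolated display blends neighboring pixels when the image is enlarged instead of showing hard pixel blocks. A minimal bilinear interpolation sketch (illustrative only, not the software's actual resampling code):

```python
def bilinear(img, x, y):
    """Sample an image (list of rows) at fractional coordinates by
    bilinear interpolation: blend the four surrounding pixels,
    weighted by distance. Coordinates must stay inside the grid."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x0 + 1] * fx
    bottom = img[y0 + 1][x0] * (1 - fx) + img[y0 + 1][x0 + 1] * fx
    return top * (1 - fy) + bottom * fy

img = [[0, 100],
       [100, 200]]
bilinear(img, 0.5, 0.5)  # -> 100.0 (blend of all four pixels)
```

With interpolation deactivated, each screen pixel instead takes the value of the single nearest image pixel, which is what produces the blocky appearance at high zoom.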
This section contains all channels that you are using in your image. You can switch the display of
channels in images on or off and change the channel colors (pseudo color assignment).
Info
For images with 8 or more channels, the Channel buttons are reduced in size. In this case it is
no longer possible to change the color channel by channel.
Parameter Description
Single-Channel Activated: Only a single channel is displayed.
Range Indicator This function helps you to set your acquisition settings, camera exposure
or detector gain, so that saturation of the detector is avoided.
The range indicator function is not available for the sum channel of
the Airyscan.
Quick Color Setup Opens a dialog that allows you to select a color quickly for all chan-
nels of a multichannel image. The following options can be set:
- Via LUT Colors for all channels are selected using a reference look-up table.
The LUT is divided up into as many sections as there are channels,
with the channel color being used at the separation point. You can
select the reference LUT using the Reference LUT button.
- Dye The color of the dye used during the experiment is displayed.
Here you can select a pseudo color for the selected channel. In the lower area of the dialog you
will see four buttons that offer various methods of color selection. The selected button is high-
lighted in blue. To change the method, simply click on the appropriate button.
Parameter Description
Weight Sets the weighting of one channel to another channel. This is only
possible with multi-channel images.
Color Here you can choose the desired color from a default color chart. The
selected color is displayed on the color button.
Player Options
The following control elements are available:
Parameter Description
Play (forwards) Plays back the image series forwards from first to last image. The dimensions
are played back one after the other in the sequence specified.
Play (backwards) Plays back the image series backwards from last to first image. The dimensions
are played back one after the other in the sequence specified.
Speed (FPS) Here you can adjust the speed at which an image series is played
back.
The speed is displayed in frames per second (FPS) in the input field.
You can also enter the desired speed directly in the input field. The
maximum play-back speed is 25 FPS.
Follow Acquisition Activated: Always displays the last acquired image during an ongo-
ing acquisition procedure, as well as the slider for the corresponding
dimension.
Dimensions Depending on the available dimensions of the active image, a slider is displayed
here for each dimension. Possible sliders:
§ Z-Stack
§ Time
§ Scene
§ Block
The sliders have each two adjustment handles, which you can use to
define the start and end point of the playback.
If there are several dimensions, you can determine, by activating the
corresponding checkbox, if you want the dimension to be taken into
account during the play-back.
Each slider offers as many steps as there are individual positions in the
specified dimension.
A third adjustment handle indicates the current position and cannot
be controlled directly.
Here you can select various tools and use these to draw graphic elements into your images. In the
list you see the graphic elements that you have drawn in to the image.
Global Graphics In general there are two classes of graphic elements: global and custom. Each global graphic element
has a set of properties, such as style or type of measurement values displayed, which can be
changed system-wide for each element. Each global element can only have one formatting style,
which is applied every time the element is used. All graphic elements can be accessed
through the Graphics menu; a selection of the most important tools is also available in the
Graphics view options tab for quick access (see image below). The content of the Graphics tab
cannot be modified, however.
Custom Graphics Custom graphic elements are available in the Custom Graphics tab. Here it is possible to configure
a collection of graphical elements according to personal preference. It is also possible to create
multiple copies of the same tool type with different formatting styles and measurement values,
which is not possible for global graphic elements. For more information, see Custom Graphics
Tab [} 1042].
Info
Graphic elements are characterized by their formatting style, can be annotated with a free text
and can contain measurement values such as geometric or gray value measurements. Add free
text by double clicking any graphic element and typing in the desired text.
Note: You can select the graphical elements and modify them. For more information, see Editing
Graphical Elements and Measurements [} 1041].
A selection of global graphic elements to work with are available to you here. For more tools,
open the Graphics [} 822] menu.
Parameter Description
Select Use this to select the graphic elements in the image area. If you are
currently in another mode, you can switch back to the Selection mode
using this button.
Clone Use this to create an identical copy of the last graphic element drawn
in by simply clicking anywhere in the image area. To exit this mode,
either switch back to the Selection mode or press the ESC key.
Draw Text Use this to insert a text field into the image. With the field drawn in,
start typing to add text.
Insert Scale Bar Automatically inserts a scale bar into the bottom right corner of the
image.
The size is set automatically to approximately 5% of the width of the
image. The length can be modified by selecting the scale bar in
the image and changing the length.
Draw Arrow Use this to draw in an arrow.
Draw Circle Use this to draw in a circle. This element by default also shows the
mean gray level of the image region.
Draw Spline Contour Use this to draw in a freely selectable contour. You can either define
the corner points by a series of clicks or you can trace a contour by
keeping the left mouse button pressed. Close this contour by right-clicking.
Corners are always rounded with this tool. This element by default
also shows the mean gray level of the image region.
Format Only active, if you have selected a graphical element in the image or
the Annotations/Measurements list.
Opens the Format Graphic Elements dialog [} 1040]. There you can
format the selected graphic element according to your preference.
Alternatively you can double-click on the list entry or right click on the
graphical element in the image and select the Format entry.
Relative Time Adds a text box to the top left of the image with information on the
relative acquisition time per channel. Relative means that the time
value is set to 0 at the time point where the element is drawn in. This
makes it easier to analyze the time it took for image dimensions such
as channels or scenes to be acquired.
Acquisition Time Adds a text box to the top left of the image with information on the
absolute acquisition time (either for the image or per channel, in the case
of multichannel images).
Relative Focus Adds a text box with information about the relative focus position.
Relative means that the focus value is set to 0 at the z-plane the element
is drawn into. This makes it easier to interpret focus changes
when playing through the z-dimension.
Focus Position Adds a text box with information about the absolute focus position as
recorded from the focus drive of the microscope.
Exposure Time Adds a text box with information about the exposure time used by
the camera, given in the format "00.000" [ss.msec].
Multi Channel Name Adds a text box with the names of all the active channels.
Keep Tool Activated: Keeps the selected tool active. This allows you to draw in
a number of the same elements one after the other.
Auto Color This parameter is only visible if Show All is activated.
Activated: Uses a new color for each element drawn in.
Layers
Only visible if the Show All mode is activated.
Here you can specify which graphic element layers are active and visible in the image. To open the
shortcut menu, click on the Layers dropdown menu. Note: Whole slide images contain many dif-
ferent layers such as regions used for automatic shading correction, defined scan regions, etc.
Parameter Description
Active Layer Here you can specify which graphics layer is active in the image. The
other layers are visible but blocked. To activate a layer click on the
menu entry.
- automatic Sets the active layer automatically. This is the default setting.
- Selection Sets the Selection layer as the active layer. This layer contains graphic
elements such as ROI selection, Grid, etc.
- Annotations/ Sets the Annotation/Measurement layer as the active layer. This layer
Measurements contains most of the graphic elements which can be drawn in such as
all annotation elements or interactive measurement tools.
- Acquisition Acquisition elements are elements which have been used in experiments
to specify acquisition ROIs or photomanipulation ROIs, such as
those used for FRAP experiments.
Layers Here you can specify which layer is visible in the image. The other lay-
ers are not visible.
Table
Here you can see almost all the graphic elements that exist in your image. You can also control
the behavior of the graphic elements here, e.g. lock or hide them. You can format each graphic
element as you wish.
Info
In the list you will only see the graphic elements relating to the active graphics plane. To
change the active graphics plane, click on the Layers button. This button is only visible in
Show All mode. Select the layer that you want to display under Active Layer.
Parameter Description
Visibility Shows or hides a graphic element.
Type Displays the icon for the tool type. To format a graphic element, dou-
ble-click on the icon. The Format Graphic Elements dialog then opens.
Name Displays the name of the graphic element. To change the name, dou-
ble-click in the Name field. Then enter the text of your choice. This
can be used to label elements in your image.
Save Saves the selected graphic element for reusing it with other images.
Dimension
The coordinates and dimensions of the selected graphic element are displayed in the correspond-
ing input fields (standard unit = image pixels):
Parameter Description
Scaled µm Activated: The dimensions are shown in scaled unit.
H Displays the height of graphic elements.
Change the height here.
Angle Displays the angle of rotation of graphic elements. Here the measured
angle is displayed for the graphic element Angle.
Change the angle here.
See also
2 Adding Annotations to Images or Movies [} 61]
This dialog can be called up via the menu Graphics > Format or via the Graphics view option
tab.
Note that for the Draw Region of Interest function, a reduced set of functions is available.
Parameter Description
Zoom with image If activated, the size of the graphic element (e.g. line width, given in
number of pixels) is related to the pixel size in the image. Therefore, when
zooming into the image, the line width increases in the same way as
the image pixels.
If deactivated, the size relates to the monitor pixel size. That means
when zooming into the image, the line width, for example, does not change.
Line Here you change the line color, width and the line style (none, solid,
dashed and dotted).
Text Here you select the text font, color, size and style by selecting the ap-
propriate options. Also select the desired horizontal and vertical align-
ment from the dropdown list.
Reading direction is only active for measurement annotations of the
Line element. One direction aligns the text annotation to the edge of
the image. Two directions aligns it parallel to the line itself.
Fill Here you can adjust, if the background of the annotation should be
filled or not. Several filling options are available.
Opacity Here you can adjust the degree of opacity of the graphic element in
percent. 100% makes the graphical element completely opaque (cov-
ering the underlying image pixels), while 0% makes it completely
transparent.
Annotation Here you can change the selected annotation and add the unit.
Set as new global If you click on this button the formatting style of the graphic elements
default currently selected in the image is set to the new default.
Reset If you click on this button the formatting style of the currently se-
lected graphic element is reset to factory default.
See also
2 Graphics Tab [} 1036]
You can change the following properties of a graphical element or an interactive measurement:
Copying/pasting an element:
Parameter Description
Select Use this to select the graphic elements in the image area. If you are
currently in another mode, you can switch back to the Selection
mode using this button.
Clone Use this to create an identical copy of the last graphic element drawn
in by simply clicking anywhere in the image area. To exit this mode,
either switch back to the Selection mode or press the ESC key.
Customize Opens the Customize Tools dialog. In this dialog you can add up to
35 graphic elements which are organized in up to 5 tool bars. You
can make changes to their formatting style and also define which
measurement values they should show.
Keep Tool Activated: Keeps the selected tool active. This allows you to draw in
a number of the same elements one after the other.
This dialog is called up via the Custom Graphics view option tab.
Parameter Description
User Toolbar This list shows all custom graphic elements added to the Custom
Graphics tab. Select a tool and double-click on its icon to open the
Format New Custom Graphic Tool dialog, see here [} 1040].
You can rearrange the order of elements by using the Up or Down
arrows at the bottom edge of the list. To delete a selected element
use the Delete icon at the bottom edge of the list.
Tools This list contains all graphical elements available. If you select an ele-
ment and double-click it, it is added as a new entry to the top
row of the User Toolbar list. Alternatively, the button with the plus
symbol at the bottom edge of the list can be used.
Frequent Annotations This list contains all frequent annotations available. Frequent
annotations are aligned rectangle elements preconfigured to show image
metadata such as acquisition time or focus position used during
acquisition.
If you select an element and double click on it, it will be added as a
new entry to the top row of the User Toolbar list. When finished,
click the Close button. The newly added elements are now shown in
the Custom Graphics tab.
Here you can adjust the image display. This function is particularly important if you want to dis-
play images with a very high dynamic range on the screen.
The histogram shows the brightness distribution of the pixels that are present from all channels si-
multaneously. The y-axis represents the relative frequency and the x-axis indicates the brightness.
A curve showing the corresponding distribution, the so-called display characteristic curve, is dis-
played for each channel.
Parameter Description
Only visible for multichannel images.
Here you can select for which channel you want to adjust the display
Channel Selection on the screen. To select all channels, click on the All button. To select
a certain channel, click on the corresponding channel field. Hovering
the mouse pointer over a color field displays the relevant channel
name.
If the image consists of more than 29 channels, a scrollbar will be dis-
played which you can use to switch to the desired channel.
Spline Mode Clicking on this button allows you to add up to 8 points to the dis-
play characteristic curve.
You can then bend the curve around these points. To do this, click on
the desired section of the display curve and move it as required. Click-
ing on the display curve again adds another point.
You can delete points by moving them along the display curve until
they lie on top of another point. In this way, even in difficult situa-
tions you can adjust the display curve so that all important image re-
gions can be displayed well.
Min/Max Adjusts the display characteristic curve so that the darkest pixel is
black and the brightest pixel is white in the display. Note that this
function makes an approximation to get a quick result! For detailed
information about minimum and maximum values refer to measure-
ments conducted in the Histo view.
Best Fit Adjusts the display characteristic curve so that 0.1% of the darkest
pixels contained in the image are black and 0.1% of the brightest pix-
els are white in the display.
Input fields With the two input fields to the right of the Best Fit button you can
change the black/white clipping percentages from the default 0.1%
to any value between 0 and 90%, according to your requirements.
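The Best Fit behavior described above can be illustrated with a short sketch. This is not ZEN's internal code; the function names and the linear mapping to the display range are assumptions, but the percentile clipping follows the description:

```python
import numpy as np

def best_fit_limits(image, clip_percent=0.1):
    """Illustrative sketch (not ZEN's actual implementation): find display
    black/white points so that clip_percent % of the darkest and
    brightest pixels are clipped, as described for the Best Fit button."""
    lo = np.percentile(image, clip_percent)
    hi = np.percentile(image, 100.0 - clip_percent)
    return lo, hi

def apply_display_limits(image, lo, hi):
    # Linearly map [lo, hi] to the display range [0, 1]; values outside
    # the limits are shown as pure black or pure white.
    return np.clip((image.astype(float) - lo) / (hi - lo), 0.0, 1.0)
```

Raising the clipping percentage via the input fields corresponds to passing a larger `clip_percent`, which sacrifices more extreme pixels to stretch the remaining intensity range further.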
Parameter Description
Dimension Selection If your images contain time series, z-stacks or both, you can select
here the aspect of the image to which the display settings should be
applied.
Note that with all settings other than Current there may be several
seconds of calculation time until the setting is applied, depending on
the number of time points/z-planes.
The following options are available:
- Current Adjusts the display for the current image and keeps this setting for all
other time points or Z-planes.
- All T Collects the intensity values from all time points and adjusts the dis-
play according to the brightest and darkest pixels within the entire
time series.
- All Z Collects the intensity values from all Z-planes and adjusts the display
according to the brightest and darkest pixels within the entire Z-stack.
- All T+Z Collects the intensity values from all Z-planes and time points and ad-
justs the display according to the brightest and darkest pixels within
the entire Z+T series.
Options Here you can copy display settings to the clipboard, insert them into
other images from there, or save and reload settings. This allows you
to apply identical display settings to several images in order to pro-
duce comparable display conditions.
Black Displays the currently set gray value up to which all pixels are shown
as black. You can also enter a specific value here.
Gamma Displays the gamma value currently set. You can also enter a certain
value here.
0.45 Sets a gamma value of 0.45. This is the recommended setting for
most color images.
1.0 Sets a linear display characteristic curve with a gamma value of 1.0.
White Displays the currently set gray value from which all pixels are shown
as white. You can also enter a specific value here.
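The interplay of the Black, White and Gamma values can be sketched as a display characteristic curve. The power-law form below is an illustrative assumption, not ZEN's internal implementation:

```python
import numpy as np

def display_curve(gray, black, white, gamma=1.0):
    """Sketch of a display characteristic curve (assumed form):
    pixels at or below `black` map to 0 (shown black), pixels at or
    above `white` map to 1 (shown white), with a gamma power law
    applied in between."""
    norm = np.clip((np.asarray(gray, dtype=float) - black) / (white - black), 0.0, 1.0)
    return norm ** gamma
```

With `gamma=1.0` the curve is linear; a gamma of 0.45 raises midtone values above the linear curve, brightening the midtones, which is why it is recommended for most color images.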
Parameter Description
Display Mode Selects which image is displayed in the image view.
Auto Filter Sets the processing strength automatically depending on the image
properties.
Parameter Description
Create Processed Saves the processed images.
Image(s)
For images acquired with Airyscan, this tab offers you different image view options. During acqui-
sition of Airyscan 2 Multiplex mode images, the result can be displayed and controlled for correct
acquisition quality with this tab. Note: If you use multi-immersion objectives, the immersion
medium must be specified in the MTB software to allow for correct processing of Airyscan
images.
Parameter Description
Display Mode Selects how the image is displayed in the image view. Note that the
image that is generated in this way is only temporary; the data is still
raw data and is processed anew each time it is viewed.
Process In the dropdown list you can select the image(s) and dimension you
want to process and open as a new image. Available options:
§ Current Image (2D)
§ Current Image (3D)
§ All Images (2D)
Create Processed Processes the active images and opens them as a new image docu-
Image(s) ment.
Parameter Description
Display Mode ApoTome images are acquired as raw data. The Display Mode sets
how the image is calculated and displayed.
– Raw Data Displays the raw data as output image and disables all other parame-
ters of the function.
Displays the Phases slider on the Dimensions tab. This enables you
to select the various phases of the raw images.
Normalization Activated: The resulting images always fill the entire 16-bit dynamic
range of the image histogram, see normalization.
Enable Correction Activated: Displays the Correction dropdown list for stripe artifact
correction.
– Phase Errors Corrects phase errors in the image without additional bleaching cor-
rection.
– Local Bleaching Corrects the bleaching for each pixel individually (default setting).
This is usually the best method.
Phase Correction Only visible if Enable Correction is activated and if you have selected
one of the two bleaching corrections for Correction.
Activated: Performs a correction of any phase deviations present in
addition to the selected bleaching correction.
Grid Displays the grid frequency used for the image in lines/mm.
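The effect of the Normalization option described above, stretching gray values over the full 16-bit range, can be sketched as follows (assumed linear stretch; ZEN's internal computation may differ):

```python
import numpy as np

def normalize_16bit(image):
    """Illustrative sketch of histogram normalization: linearly stretch
    the gray values so the result fills the full 16-bit dynamic range
    (0..65535). Assumes the image is not constant."""
    image = np.asarray(image, dtype=float)
    lo, hi = image.min(), image.max()
    return ((image - lo) / (hi - lo) * 65535.0).astype(np.uint16)
```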
Parameter Description
– Refractive Index Embedding Sets the refractive index of the embedding medium used.
– Distance to Sets the distance of the acquired structure from the side of the cover
Coverslip slip facing the embedding medium.
– Fast Iterative Uses an algorithm based on deconvolution methods for structured il-
lumination microscopy, with some enhancements as described in the
technical note. It is faster and less memory-intensive, and relies
purely on the image formation model.
Maximum Iterations Visible for the Fast Iterative and Constrained Iterative algorithms.
Sets the maximum permitted number of iterations. In the case of
Richardson-Lucy, you should allow significantly more iterations here.
– SIM Correction Activated: Removes stripe artifacts created by image acquisition and
corrects for false phases in the metadata.
Create Image Creates a new image document. The available settings are taken into
consideration here.
In most image views you will see the PSF Display tab as soon as a PSF image has been loaded.
PSF images differ from the data types of normal images. They are saved, for example, in the high-
precision floating point format. A series of important values that allow conclusions to be drawn
about the microscope system and sample conditions can also be read from PSF images.
§ Intensity PSF: The PSF is displayed in the position space, gray values are displayed in floating
point format.
§ Intensity OTF: The optical transfer function (OTF) displays the 3D PSF in the frequency space
following a 3D Fourier transformation. Gray values are displayed in floating point format.
§ Intensity Slice OTF: Displays the 2D Fourier transformation of each individual Z-plane.
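The relationship between the PSF and OTF displays described above can be sketched with NumPy's FFT routines (illustrative only; ZEN's internal computation may differ, e.g. in normalization and shifting conventions):

```python
import numpy as np

def intensity_otf(psf):
    """OTF magnitude as the 3D Fourier transform of the intensity PSF,
    shifted so the zero frequency lies in the center (sketch only)."""
    return np.abs(np.fft.fftshift(np.fft.fftn(psf)))

def intensity_slice_otf(psf):
    """Per-Z-plane 2D Fourier transform: one 2D OTF slice per Z-plane,
    matching the Intensity Slice OTF description."""
    return np.abs(np.fft.fftshift(np.fft.fft2(psf, axes=(-2, -1)), axes=(-2, -1)))
```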
Axial cut view checkbox
Activated: Displays the PSF in axial section view.
Deactivated: A slider for Z appears on the Dimensions tab. This allows you to move through the
various Z-planes.
Info
Please note that the resolution values for measured PSFs show the performance of the entire
system, consisting of all optical and electronic components. The sample, with its optical prop-
erties and possible aberrations, therefore has a significant impact on the resolution. This
means that these values are not suitable for making statements about the quality of the objec-
tive.
In this mode the image is displayed at full monitor size.
To start the full screen mode, position the cursor on the image area and open the context menu
via right mouse click. Click on Full Screen. You can also press F11 or click on menu Window >
Full screen as an alternative.
Toolbar
In the toolbar at the bottom you find several buttons for general and image-specific functions,
such as the zoom function (Zoom button) or image information (Info button). When you open a
multidimensional image, you also find buttons for dimension-specific functions, e.g. Z-Stack,
Channels. To open the functions, click on the button.
Previous button
Displays the previous document in full screen mode. You can page step by step backwards
through all open documents.
Next button
Displays the next document in full screen mode. You can page step by step forwards through all
open documents.
Exit Fullscr. button
Closes the full screen mode.
To open the Splitter mode click on the splitter mode icon in the document bar [} 28].
In this mode you can generate a multi image of one or several images in order to compare them.
Drag an image from the Images and Documents gallery in the Right Tool Area and drop it
into a splitter position. The default setting for the splitter is 2 columns and 1 row. You can modify
these settings on the Split Display tab.
Proceed similarly with further images to be displayed in the multi image. The same image can be
dropped several times in the splitter view, e.g. to compare different image scenes.
The multi image can be saved as the CZSPL (Zeiss Multi Image Files) image type via the menu File >
Save As. The stored multi image is not an image document, but rather a reference to the images
displayed in the splitter mode.
Use the Split Display tab for further adjustments (e.g. arrangement) of the splitter mode. Here
you can create a single image of the multi image to be saved as a CZI image file.
Arrangement section
Here you can set how many columns and rows the splitter image should have. To do this, simply
enter the desired number in the Columns/Rows input fields.
Parameter Description
Arrangement
Parameter Description
– Columns Sets the number of columns displayed in the splitter view mode.
– Rows Sets the number of rows displayed in the splitter view mode.
Synchronize Dimensions Activated: The settings of the Dimensions tab (e.g. Zoom) are
applied synchronously to all images in splitter mode. Does not apply to
rotations.
Synchronize Display Activated: The settings of the Display tab (e.g. Gamma) are
applied synchronously to all images in splitter mode.
Show Position Data Activated: The cursor changes to an arrow symbol with a cross
marker in the image. The X/Y coordinates with scaling unit and gray
value of the current cursor position are displayed below the image.
Furthermore, additional information is displayed for multidimensional
images: e.g. the gray value for each channel of a multichannel image,
the time of each time point of a time lapse image, or the focus position
of each z-position of a z-stack.
Reset By clicking on this button you can reset all adjustments applied to the
images in splitter view.
New Image From Here you can select the type of image to be generated. The available
options depend on the dimensions of the displayed image.
- Current View Creates a two-dimensional image of all opened images visible in the
splitter mode.
- Nearest Neighbor The output pixel is given the gray value of the input pixel that is
closest to it.
- Linear The output pixel is given the gray value resulting from the linear com-
bination of the input pixels closest to it.
- Cubic The output pixel is given the gray value resulting from a polynomial
function of the input pixels closest to it.
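The difference between the Nearest Neighbor and Linear options can be illustrated with a 1D sketch (hypothetical helper functions, not part of ZEN; real image resampling works the same way in two dimensions, with four neighboring pixels for the linear case):

```python
import numpy as np

def resample_nearest(values, x):
    """Nearest neighbor: the output takes the value of the closest input pixel."""
    idx = np.clip(np.round(x).astype(int), 0, len(values) - 1)
    return np.asarray(values)[idx]

def resample_linear(values, x):
    """Linear: the output is a distance-weighted combination of the two
    neighboring input pixels (1D sketch of the linear option)."""
    values = np.asarray(values, dtype=float)
    x = np.clip(x, 0, len(values) - 1)
    i0 = np.floor(x).astype(int)
    i1 = np.minimum(i0 + 1, len(values) - 1)
    t = x - i0
    return (1 - t) * values[i0] + t * values[i1]
```

Nearest neighbor preserves original gray values but produces blocky results; linear (and, with a larger neighborhood, cubic) interpolation yields smoother transitions at the cost of introducing intermediate gray values.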
Parameter Description
Spacing color Only visible if the Show All mode is activated.
Changes the background color of the splitter image.
The file browser is accessed via File > Open File Browser. You see an overview of all files (im-
ages, documents, etc.) stored on the computer. In the left column you see a file structure which is
associated with the common image or data containing folders in your file system (Images and
documents). In the right area you see a preview of the selected folder.
Info
The ZEN folders automatically contain the Auto Save folder. Here you see all auto-saved im-
ages from ZEN. Set the Auto Save path under Tools > Options > Saving.
Gallery View
Here you see all files of a folder as small preview images (thumbnails). Use the Tool tab to adjust
preview image size, sorting, etc.
Info View
Here you see a detailed list with all data the selected image contains. Find a detailed description
of all possible data under Info View [} 1010].
Table View
Here you see all files of a folder well-arranged in a table. This view is perfect for folders which
contain many files.
Parameter Description
Icon size Sets the size of the thumbnail images.
Text Rows Selects the entries which you want displayed as additional text rows
under the thumbnail image.
Record Switches from file to file in the selected folder by using the slider.
Sorting Arranges your files by certain properties (e.g. file name, type).
Folders Manages the selected file folders (e.g. new folder, rename folder).
Selection Manages the selected files (e.g. copy file or delete file).
10.1.1 Introduction
In the following chapters you will learn how to calibrate the ApoTome for a two-channel experi-
ment and acquire a two-channel image. This image will be used as a basis for demonstrating the
processing options. After this, a Z-Stack image will be acquired and processed with the help of
ApoTome deconvolution.
Phase calibration, if it has not yet been performed, is carried out from the Locate tab, while the
other steps are all performed from the Acquisition tab.
Grid Focus Calibration is an important step. It is best to perform this using the sample that you
will want to acquire later, to guarantee identical optical conditions. If your sample is prone to sig-
nificant bleaching, you can also use the calibration slide provided.
Background information on the ApoTome can be found here:
The optics of a microscope are optimized for analyzing very thin samples. For a cover-slip-cor-
rected objective, all optical calculations are performed for very thin objects that lie directly be-
neath the cover slip. All cover-slip-corrected objectives from ZEISS are optimized for this particular
usage, and exhibit an optimum point spread function (PSF) for the wavelengths for which the cor-
responding objective has been specified.
In biological applications, however, the vast majority of specimens used do not satisfy these opti-
mum requirements. Sometimes thicker biological tissue slices are used, e.g. to analyze cells in the
tissue using specific fluorescent markers.
In such cases, during microscopic analysis, and particularly during documentation, the set focal
plane is hidden by parts of the image that originate from above and below the actual focal plane.
As a result the image appears "faded", the contrast is reduced, and the background becomes
bright. In extreme cases important structures and image details may be completely hidden.
Optical section
The three raw images are combined online on the PC and displayed as an optical section. This
combined resulting image is an optical section through the sample with the following properties:
§ The grid structure has been removed from the raw images.
§ The parts of the image that are out of focus are no longer visible.
§ The sharpness and contrast of the image have been increased.
§ The image’s resolution in the axial direction has been increased.
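The principle of combining the phase images into an optical section can be illustrated with the classical three-phase formula from the structured illumination literature (Neil et al.). This is only a sketch of the principle; ZEN's ApoTome processing additionally applies phase and bleaching corrections and supports more than three phases:

```python
import numpy as np

def optical_section(i1, i2, i3):
    """Classical three-phase structured illumination sectioning formula,
    shown only to illustrate the principle (not ZEN's actual algorithm).
    Out-of-focus light is (ideally) identical in all three phase images
    and cancels in the differences; the grid pattern also cancels, while
    in-focus structure modulated by the grid survives."""
    i1, i2, i3 = (np.asarray(a, dtype=float) for a in (i1, i2, i3))
    return np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
```

A uniform out-of-focus background yields zero in this combination, which is why the out-of-focus parts of the image disappear from the resulting optical section.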
You will now see the Phase slider on the Dimensions tab. Here you can locate the individual grid
positions. This view can be useful when looking for errors, e.g. to find out where residual streaks
in the processed resulting image originate from.
The following requirements should be met in order to produce optimum ApoTome images:
§ Exposure time of the camera: this should be set so that approx. 80% of the camera's dy-
namic range is used. The smaller the dynamic range of the images of the lines, the more noise
the combined resulting images will contain.
§ Correct calibration: Good results can only be achieved if calibration has been performed
correctly. Ideally you should use your own sample for calibration. If this does not lead to good
results, use the calibration slide provided.
§ Sufficient grid contrast in the sample: Good section image results can only be achieved if
the grid lines in the live image can be clearly identified in all object areas. Under certain cir-
cumstances samples with very homogeneous staining throughout may not be suitable for
ApoTome images.
§ Avoid vibrations during acquisition, as any movement of the grid position during acquisition
can lead to streak artifacts.
§ Number of phases: Although 3 grid positions (also called phases) can completely cover the
object structures that are in focus and are therefore sufficient for creating optical sections, the
results are significantly better when 5 or more phases are acquired. For this reason 5 phases
are acquired as standard.
§ Selection of the correct grid frequency: Under normal circumstances, the automatic grid
selection yields the best results. In the case of difficult samples, e.g. if the staining is weak, se-
lecting a different grid manually can lead to better results.
§ Avoid electronic interference: The ApoTome's scanner unit is equipped with highly precise
control. Avoid sources of electrical interference, e.g. do not leave cell phones close to the
ApoTome, to prevent incorrect positioning of the grid.
The following list gives an overview of the recommended objectives and their compatibility with
several microscopes. The links give detailed information about the features of each listed objec-
tive.
Overview
§ EC Plan-Neofluar (Axio Observer/Axio Zoom.V16) [} 1059]
§ LCI Plan-Neofluar (Axio Observer/Axio Zoom.V16) [} 1060]
§ Plan-Apochromat (Axio Observer/Axio Zoom.V16) [} 1060]
§ LD LCI Plan-Apochromat (Axio Observer) [} 1061]
§ C-Apochromat (Axio Observer) [} 1061]
§ LD C-Apochromat (Axio Observer) [} 1061]
§ a Plan-Apochromat (Axio Observer) [} 1061]
§ a Plan-Fluar (Axio Observer) [} 1062]
Microscopes
§ Objectives for Axio Observer
§ Objectives for Axio Zoom.V16
10.1.4.1 EC Plan-Neofluar
10.1.4.3 Plan-Apochromat
10.1.4.4 LD LCI Plan-Apochromat
10.1.4.5 C-Apochromat
10.1.4.6 LD C-Apochromat
10.1.4.7 a Plan-Apochromat
10.1.4.8 a Plan-Fluar
Before you can use the ApoTome for your experiments, the optimum angle of deflection of the
scanner unit must be set on the ApoTome 2. This fine adjustment of the scanner unit only has to
be performed once after the system has been set up. The mirror slide and special reflected light
reflector cube, both of which are supplied with the ApoTome 2, are used for this purpose.
Calibration only needs to be performed for one grid and one objective. It is advisable to calibrate
the grid for the low magnification range (grid marked with an "L" for "Low magnification") using
a 20x objective.
The positioning of the camera is also optimized in the dialog for phase calibration. To achieve op-
timum performance, the camera horizontal should be aligned parallel to the ApoTome 2 grid lines
with as much precision as possible.
The calibration process is supported by a wizard. Start the function by selecting the ApoTome
Phase Wizard function from the Acquisition menu.
The wizard guides you through the calibration process in 5 steps. Follow the instructions in the
text field of the wizard.
For phase calibration you will need the mirror slide provided and the calibration filter
(424930-9902-000). The filter is designed for use with white light sources such as the HXP120C
and cannot be used with the LED light source Colibri. If your ApoTome system was ordered and
supplied exclusively with Colibri, a suitable calibration filter (424930-9000-000) has already been
provided. If Colibri has been retrofitted and the calibration filter is not available, it is not possible
to perform phase calibration. In this case please contact your ZEISS sales representative.
Aim
In this step we will set up a two-channel experiment. To do this, we will use the Smart Setup
function. The ApoTome is in the first click-stop position, i.e. in the empty position without a grid.
1. Place your sample onto the microscope stage, localize it with the help of the functions on
the Locate tab and bring it into focus.
2. Now go to the Acquisition tab and create a new experiment in the Experiment Man-
ager:
4. Select the appropriate dyes (in our case Alexa 488 and Alexa 568):
You have successfully set up the channels. In the live mode you can now check the focus and ex-
posure time for the two channels.
Aim
In this step you will calibrate the focus position of the ApoTome grid for the selected channels.
This step is essential, as without a valid calibration it is not possible to perform an ApoTome ex-
periment. Provided that no changes are made to the device settings that are important for calibra-
tion (objective, filter and illumination source, camera) the calibration remains valid for future ex-
periments. We nevertheless recommend that you repeat the calibration from time to time, espe-
cially if the sample type you are analyzing changes. Calibration takes place in a wizard, which
guides you through 3 simple steps.
1. Move the ApoTome to the second click-stop position so the grid is positioned in the beam
path.
2. From the Acquisition menu item open the ApoTome Focus Calibration Wizard … en-
try.
3. In the first step select the channel for which you want to perform calibration.
5. On the left you have the option of adjusting the exposure time and, if necessary, the illumi-
nation intensity (with corresponding light sources only).
6. To adjust the exposure time correctly, select the saturation display on the Dimensions tab.
Regions overlaid in red indicate that pixels are saturated.
7. Reduce the exposure time accordingly. The ideal situation is where approx. 70% of the his-
togram is filled for the brightest region of the sample.
8. Position the rectangle in the live image and adjust its size in such a way that it covers fairly
homogeneous fluorescent structures and does not lie over the background. The grid focus
is only determined within this rectangle.
à More precise grid focusing can be achieved if you click on the Start Local Scan button.
However, this is only recommended for samples that are not particularly prone to bleach-
ing. For samples prone to significant bleaching, the results of the Local Scan would
measure considerably lower intensities and distort the result.
12. Click Finish.
13. To perform calibration for another channel, answer Yes to the question "Do you want to
calibrate another channel?" in the dialog that is now displayed.
à The wizard then begins again from Step 1.
14. As soon as you have calibrated all channels, exit the wizard by clicking on No.
You have successfully performed grid focus calibration. Now continue with the next step.
Aim
In this step you will perform acquisition for a two-channel experiment. You will use the same
channels that were set up in Step 1. The objective must also be the same one used to calibrate
the grid focus.
3. Adjust the exposure time for the two channels in such a way that approx. 70% of the his-
togram is used.
Aim
ApoTome images that are acquired from the Acquisition tab always take the form of raw data.
In this step, with the help of the image you acquired in Step 3, we will look at the various display
options available for ApoTome raw images. We will also create a processed resulting image,
which you can process further as required.
1. In the Center Screen Area go to the ApoTome tab. This view option is only displayed for
ApoTome raw images.
à If the Show All mode is deactivated you will see two view options: the Display Mode
settings and the Create Image button.
The default selection for Display Mode is the Optical Section view.
2. Select the Conventional Fluorescence option from the Display Mode dropdown list.
à The image is now no longer displayed as an optical section, but as a conventional fluo-
rescence image ("widefield"):
3. Activate the Show All mode to see additional settings for the calculation of the optical sec-
tion image.
4. To see the difference between a corrected and uncorrected image, deactivate the Enable
Correction option. Detailed information on the individual options can be found in the on-
line help.
5. If you have not yet done so, enable the correction using the Local Bleaching option.
6. Create a new, processed resulting image by clicking on the Create Image button.
You have successfully processed the ApoTome image and created a resulting image for further
processing.
Aim
In this step you will acquire a Z-stack image with the same channel settings as in Step 3.
1. On the Acquisition tab activate the Z-Stack acquisition dimension in the Experiment Man-
ager.
3. Start the Live Mode and define the dimensions of the Z-stack for your specimen. To cap-
ture the entire object three-dimensionally, you should set the upper and lower limit in such
a way that object structures can no longer be seen in focus. Set the interval between the
individual Z-planes using the Optimal button:
You have successfully acquired a Z-stack image. Save the resulting image under a meaningful
name via File > Save (Ctrl+S).
Aim
In this step ApoTome deconvolution will be performed for the Z-stack acquired in Step 5. This
enables you to significantly enhance the image, beyond what is possible using the normal Apo-
Tome processing functions.
Prerequisite ü The Z-stack image must be in the foreground and in the 2D view. Go to the ApoTome tab
(view option). Make sure that the Show All mode has been activated.
1. Activate the Deconvolution checkbox.
à Make sure that the Set Strength Manually option is also activated. The Strength slider
is set to Medium by default. Retain this setting for the time being.
2. Click on the Apply deconvolution button.
à Depending on the image size and the specifications of the computer, the processing can
take anything between a few seconds and a few minutes. Make sure that you also adjust
the brightness and contrast using the settings on the Display tab (tip: try out the Min/
Max button).
3. Examine the result by navigating through the Z-stack using the Z-Position slider on the Di-
mensions tab.
4. Change the Strength setting on the ApoTome tab until there is an obvious improvement
and there is no disruptive background noise.
5. To obtain the result as a separate image document, click on the Create image button.
6. Using the Splitter Mode [} 1051] you can now compare the resulting image with the wide-
field and the ApoTome processed version.
You have successfully performed ApoTome deconvolution, created a resulting image and com-
pared different resulting images.
See also
2 Apotome Tab [} 1046]
10.1.12 Reference
Parameter Description
Recommended In this section you can set the grid with which you want the ApoTome
Grid to be operated.
Calibration Status Here you can see whether your ApoTome has been calibrated suc-
cessfully or whether calibration needs to be performed.
Theoretical Thickness The theoretical section thickness for the selected filter set and the
objective used is displayed here.
Parameter Description
Camera Here you can select the camera you wish to use to acquire your Apo-
Tome images. As soon as you have selected a camera, ApoTome im-
ages are generated automatically during acquisition (Snap). The se-
lected camera also applies to the Acquisition tab.
Live Mode Here you can choose between the No Combination, Optical Sec-
tion and Conventional Fluorescence modes for the live image.
Acquisition Mode Here you can choose between the No Combination, Optical Sec-
tion and Conventional Fluorescence modes for acquired images.
Phase Images Here you can choose between no fewer than 3 and no more than 15
phases. Each phase corresponds to a grid position. By default, 5
phases are acquired.
Filter Here you can set a filter which can be used to filter out residual
streaks from the image. You have a choice between no filtering (Off)
and three strength levels.
Image Normalization Activated: The gray values are extended to the maximum available
dynamic range following the calculation, see Normalization.
Prerequisite ü You have started ZEN and the microscope is operational with the Auto Immersion device
completely installed and configured.
ü The immersion fluid level in the liquid reservoir of the Auto Immersion device is Sufficient.
ü For a robust auto immersion performance, use a default setting of 50% for stage speed
and 50% for stage acceleration. The settings for the stage depend on the objective used and
the distances to be traveled. Therefore, these settings can be optimized by the user for differ-
ent applications (speed vs. robustness).
1. Focus on the sample with a low NA overview objective, e.g. 5x or 10x objective.
2. Set up your acquisition experiment and select the immersion objective in the Microscope
tool in the Right Tool Area. Alternatively, select the immersion objective with the Micro-
scope Control tool on the Locate tab. In both cases the objective is illustrated with a little
water drop icon.
à The image gets blurred.
3. In the Microscope tool in the Right Tool Area, click Create.
à The set amount of immersion fluid is applied to the objective. The image becomes sharp;
refocusing the sample may be required.
4. Start your experiment.
5. If the immersion gets too low during the experiment, go to the Microscope tool and click
Renew.
à The immersion is renewed.
See also
2 Microscope Tool [} 1079]
2 Auto Immersion Tool [} 1080]
Prerequisite ü You have started ZEN and the microscope is operational with the Auto Immersion device
completely installed and configured.
ü The immersion fluid level in the liquid reservoir of the Auto Immersion device is Sufficient.
ü For a robust auto immersion performance, use a default setting of 50% for stage speed
and 50% for stage acceleration. The settings for the stage depend on the objective used and
the distances to be traveled. Therefore, these settings can be optimized by the user for differ-
ent applications (speed vs. robustness).
1. On the Acquisition tab, activate Auto Immersion.
à Auto immersion is activated and the tool is displayed on the Acquisition tab.
2. Focus on the sample with a low NA overview objective, e.g. 5x or 10x objective.
3. Set up your acquisition experiment and select the immersion objective in the Microscope
tool in the Right Tool Area. Alternatively, select the immersion objective with the Micro-
scope Control tool on the Locate tab. In both cases the objective is illustrated with a little
water drop icon.
4. In the Microscope tool in the Right Tool Area, click Create.
à The immersion is created by adding the immersion fluid to the objective. Alternatively, if the oil stop is activated, the auto immersion objective stays in the load position and a pop-up window is displayed asking whether to apply immersion or clean the objective/specimen. When you click Continue and Create Immersion, the selected immersion objective moves into the work position and the immersion fluid is applied automatically.
à After clicking Create, the settings for time interval and travel distance are used for automated immersion renewal. During Live, Continuous, or an experiment, the immersion is then renewed whenever the dedicated auto immersion objective is used.
à Only clicking Create in ZEN (in the Microscope tool) or Continue and Create Immersion in the oil stop message initiates the automatic immersion renewal with the global timer.
5. To configure the automatic renewal of the immersion, go to the Auto Immersion tool on the Acquisition tab and set a time interval (in minutes) and a stage travel distance (in mm) after which the immersion should automatically be renewed. Depending on the ventilation, air dryer performance and the air conditioning, suitable values can vary considerably between labs. For information about parameter values suitable for different environmental conditions, see Settings for Automatic Immersion Renewal [} 1079].
6. Start your experiment, or a live image or continuous image acquisition.
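The renewal trigger described above (a time interval or a stage travel distance, whichever is reached first) can be sketched as follows. This is an illustrative helper, not part of ZEN; the defaults of 60 minutes and 500 mm are example values chosen from within the permitted ranges.

```python
# Illustrative sketch (not ZEN code): automatic immersion renewal is due
# when either the configured time interval or the accumulated stage travel
# distance has been reached, whichever comes first.

def renewal_due(elapsed_min: float, traveled_mm: float,
                interval_min: float = 60, distance_mm: float = 500) -> bool:
    """Return True if the immersion should be renewed now."""
    return elapsed_min >= interval_min or traveled_mm >= distance_mm

# Example: only 45 min elapsed, but 520 mm of stage travel accumulated
print(renewal_due(45, 520))  # True: the distance threshold was reached
```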
Prerequisite ü You have started ZEN and the microscope is operational with the Auto Immersion device
completely installed and configured.
ü The stand and the software are switched on and in operating mode.
1. On the TFT, select Home > Settings > Extras > Immersion.
2. Press the button below Nosepiece position with Autoimmersion.
à A window opens.
3. Use the arrow keys to select the objective position with the auto immersion setup that will
be used for auto immersion experiments.
4. Click Save; otherwise the selected objective with auto immersion is not changed.
à The selected objective is marked with a black circle on the TFT.
5. To activate the auto immersion in the software as well, restart the software.
à The auto immersion position is visualized by a drop within the objective icon.
à The auto immersion position can also be changed under the auto immersion module op-
tion in the MTB.
Depending on ventilation, air dryer performance, humidity and air conditioning, the settings for auto immersion and the automatic renewal of the immersion can vary considerably. The following table provides an overview of which values might be suitable for the parameters. Note that these values serve only as a guideline.
Parameter Description
Objective List Selects the objectives and pre-magnification. The color bar on the ob-
jective buttons indicates the color for the respective stage limit indica-
tor inside the Navigation tab.
If you select autocorr objectives (motorized correction collar) you can
additionally adjust the relevant settings like Correction Mode, Bot-
tom Thickness or Imaging Depth.
If you have an auto immersion objective available, this objective is il-
lustrated with a little water drop icon next to the magnification.
Tank Level Displays the tank level of the auto immersion tank.
§ Sufficient (green): More than 25% of immersion fluid remains.
§ Low (red): Less than 25% of immersion fluid remains; you have to refill it.
Create Creates the first-time immersion by adding the immersion fluid to the objective and starts the global timer and distance measurement for automated renewal of the immersion fluid.
See also
2 Establishing and Renewing Immersion Manually [} 1077]
2 Establishing Immersion and Renewing it Automatically [} 1078]
Parameter Description
Renew Every Sets the time interval in minutes (10 to 120 minutes) and/or the travel
distance (200 mm to 1000 mm) of the stage in mm, after which the
immersion is automatically renewed.
See also
2 Establishing Immersion and Renewing it Automatically [} 1078]
2 Settings for Automatic Immersion Renewal [} 1079]
Info
For additional information and detailed descriptions, refer to further applicable documents or
ask your ZEISS Sales & Service Partner.
10.3 Axioscan
10.3.1 Introduction
The general acquisition workflow for ZEN slidescan centers on the following two main actions:
1. Setting up your scan profile(s), see Setting Up Scan Profiles [} 1082], including the advanced
setup of the profiles, see Working in the Advanced Scan Profile Editor [} 1083].
2. Starting your actual scan based on the created profiles, see Starting Your Scan Experiment
[} 1097].
The following steps will show you how to set up brightfield, fluorescence or polarization scan
profiles. You can set up a basic profile with the Smart Profile Selection [} 1118] wizard and refine
it in the Advanced Scan Profile Editor [} 1120], or you can set up your profile directly with the
Advanced Scan Profile Editor [} 1120]. In both cases a software wizard guides you through the
necessary steps required to create scan profiles. To navigate between the steps you can use the
Next and Back buttons. In the case of the Advanced Scan Profile Editor, you can also directly
click on a specific step without the need to go through all steps all the time. If you click Cancel,
all changes you made will be discarded. When you click Finish, all changes made are stored in the
scan profile.
Info
Advanced Editor
The editor can be switched between basic mode and Show all mode. In most cases the basic
mode should be sufficient for setting up scan profiles. It is easier to use as it only contains the
most important controls. If you are missing an option, activate Show all to see all options.
Info
Profiles in Smart Profile Selection
Any profile created in the Smart Profile Selection is derived from a pool of default profiles
which are installed on the system. The templates themselves cannot be changed by the user,
but the resulting user profiles can of course be further adapted. User profiles are stored in the
ZEN user data folder: C:\Users\<username>\Documents\Carl Zeiss\ZEN\Documents\Scan Pro-
files\v5
The Advanced Scan Profile Editor [} 1120] automatically opens to give you the opportunity to fur-
ther adapt this profile to your specific needs. For more information, see Working in the Ad-
vanced Scan Profile Editor [} 1083].
10.3.2.1.2 Setting Up a New Profile Without the Smart Profile Selection Wizard
1. At the top of the Magazine view, click and select Create New Scan Profile.
2. Enter the name of your new profile and click . Alternatively, enter the name and
press Enter. The name must have at least one character.
à The new profile is created and the Advanced Scan Profile Editor [} 1120] opens.
3. Set up your profile in this advanced editor. For detailed information, see Working in the
Advanced Scan Profile Editor [} 1083].
In general, the work in the Advanced Scan Profile Editor includes the following steps:
1. In the beginning, you get an Overview [} 1120] of your current profile, see Starting the
Setup in the Advanced Scan Profile Editor [} 1083].
2. In the Label [} 1120] step, you set up the scan of the label area of your slide, see Configur-
ing the Scan of the Label Area [} 1084].
3. In the Preview [} 1123] step, you set up the preview image acquisition, see Setting Up the
Preview Acquisition [} 1085].
4. If you selected the scan camera for your preview instead of the preview camera, you set it
up in the Pre-Scan [} 1124] step, see Setting Up a Pre-Scan [} 1088].
5. In the Sample Detection Settings [} 1126] step, you define the settings to detect the re-
gions in your sample that should be scanned, see Defining the Sample Detection Settings
[} 1089].
6. Then you create a coarse and a fine focus map in the Focus Map Settings [} 1133] steps,
see Adjusting Settings for Coarse Focus Map [} 1089] and Adjusting Settings for Fine Fo-
cus Map [} 1092].
7. In the Scan [} 1138] step, you finish the setup for the scan of your slides, see Adjusting
Scan Settings [} 1093].
Prerequisite ü You are in the Overview [} 1120] step of the Advanced Scan Profile Editor [} 1120].
ü You have given the new profile a unique name.
1. In the Description field, adapt the description of your profile, if necessary.
2. If you have started the editor either with a new or a default profile created by the Smart
Profile Selection wizard, select the Tray and Slide the profile should be based on and
click Load Slide. This step is optional, but is usually helpful as it enables you to set up the
scan parameters such as brightness and exposure time based on the samples that are
scanned later.
à You have selected the Tray and Slide the profile is based on.
3. Click Next.
à The Label [} 1120] step opens.
You have started the advanced setup of your scan profile and can continue with the Label [} 1120]
step. For more information, see Configuring the Scan of the Label Area [} 1084].
Prerequisite ü You are in the Label [} 1120] step of the Advanced Scan Profile Editor [} 1120].
1. Set up the acquisition for the scan of the label area and adjust the Exposure, if necessary.
à A live image is displayed in the center view of the wizard. The label is illuminated from
the top to create a reflected light appearance of the label.
2. If required, select the form of text/barcode recognition by activating OCR or Barcode re-
spectively. Note that OCR is only available if you have licensed the respective module.
à The text is displayed in the Recognized Text field below.
3. Click Next.
à The Preview [} 1123] step opens.
You have configured the scan of the label area and can continue with the Preview [} 1123] step.
For more information, see Setting Up the Preview Acquisition [} 1085].
If you have inserted 4”x3” slides, the Label step has some additional functions. This is because the preview image for the label is a montage of two images created with the preview camera.
Info
Recognition and Label Orientation
In the mode for the 4”x3” slides, the Recognition and Label Orientation expander is only available in Snap mode, where the two stitched images are displayed.
5. If necessary, adjust the red frame that defines the acquisition region for the label.
You have set up the label scan for the special 4"x3" slides.
Prerequisite ü You are in the Preview [} 1123] step of the Advanced Scan Profile Editor [} 1120].
1. Adjust the red frame (region of interest) in the live image to define the area, where you ex-
pect the sample to be. For more information, see Setting Up the ROI for the Preview Im-
age [} 1086].
à A live image is displayed. The slide is illuminated from below to create a transmitted light
image of the sample.
2. If necessary, adapt the camera settings like the Exposure or White Balance.
à The display of the live image is adjusted accordingly.
An important adjustment to be made is the definition of the ROI (region of interest) for the pre-
view image of the specimen area. The system takes this region of interest and captures it. The au-
tomatic sample detection (if selected) is also applied to this region of interest.
2. Reposition the rectangle via drag and drop, if necessary. Avoid that the red frame covers
the label area of the slide.
3. To resize the rectangle, click the respective handles and drag them with the mouse button pressed, as you would resize any graphic element.
You have set up the ROI for the preview image.
Info
Optimal Size of ROI
For best results when doing bright field scans, the red frame should enclose the complete slide
and some space around it. Especially for the detection of an empty area on the slide, for the
automated shading correction on glass, it is important that the red frame is not set too small
(e.g. around the tissue only). Otherwise the sample could be considered an empty region. This would result in a sub-optimal shading correction. If you want to limit the range for the sample detection, use the red frame in the Sample Detection Settings step.
Info
Adjust Display Settings
You can adjust the display curve (see Display Tab [} 1043] at the bottom of the wizard). The display settings are stored within the profile. This also affects the display of the label in the Magazine View [} 1142] as well as how the specimen area is displayed in the Magazine view.
If you have inserted 4”x3” slides, the Preview step has some additional functions. This is because the preview image of the specimen area is a montage of two images from the preview camera.
5. If necessary, adjust the red frame that defines the acquisition region for the scan.
You have set up the preview for the special 4"x3" slides.
Prerequisite ü In the Preview [} 1123] step, you have selected the Scan Cam because the sample is not visi-
ble in the preview image created by the preview camera due to low contrast.
ü You are in the Pre-Scan [} 1124] step of the Advanced Scan Profile Editor [} 1120].
1. In the Channels section, set up a channel for creating a pre-scan image. Select a contrast
method which is suitable to provide a good preview of the sample. This can be e.g. TIE or a
fluorescence channel.
2. Adjust the red frame to define the area where you expect the sample to be. Contrary to the
recommendation for bright field preview, select a size as small as possible to reduce scan
times.
3. To reduce pre-scan time, in the Acquisition Mode section (only visible if Show All is acti-
vated), select a high Binning and Gain as well as a short exposure time.
4. Click Find Focus to start the autofocus.
5. Adjust the exposure time and the illumination so that you can see the sample with a quality
which is sufficient to detect the sample in the Sample Detection step, either automatically
or by drawing in a scan ROI.
6. Click Start Prescan to start a pre-scan. Note: Since the system does not yet know where the sample is located when creating a pre-scan, it cannot be guaranteed that the focus support points lie within the sample. Five focus support points are placed, one in each quadrant of the scan ROI and a fifth one in the center. Focus support points which do not hit the sample are automatically discarded.
7. If your sample is now correctly visible in the image, click Next.
à The step Sample Detection Settings [} 1126] opens.
You have set up your pre-scan and can continue with the setup on the Sample Detection Set-
tings [} 1126] step. For more information, see Defining the Sample Detection Settings [} 1089].
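The support point placement described in the pre-scan note above can be sketched as follows. This is an illustration only: placing each point at the center of its quadrant is an assumption for the sketch, not documented ZEN behavior.

```python
# Illustrative sketch (not ZEN code): five focus support points for a
# rectangular scan ROI - one per quadrant plus one in the center. The
# exact in-quadrant position (quadrant centers here) is an assumption.

def prescan_support_points(x0, y0, width, height):
    """Return five (x, y) focus support points for a rectangular ROI."""
    qx, qy = width / 4, height / 4           # quarter dimensions
    return [
        (x0 + qx, y0 + qy),                  # upper-left quadrant
        (x0 + 3 * qx, y0 + qy),              # upper-right quadrant
        (x0 + qx, y0 + 3 * qy),              # lower-left quadrant
        (x0 + 3 * qx, y0 + 3 * qy),          # lower-right quadrant
        (x0 + width / 2, y0 + height / 2),   # ROI center
    ]

# Example: a 4 x 4 mm ROI with its origin at (0, 0)
print(prescan_support_points(0, 0, 4, 4))
```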
Prerequisite ü You are in the Sample Detection Settings [} 1126] step of the Advanced Scan Profile Editor
[} 1120].
1. Select the Sample Detection Processor. You have the choice to define a standard sample
detection, or you can select Custom Sample Detection to define a custom sample detection with the help of an image analysis and/or a macro.
2. In case of Custom Sample Detection, activate Use Image Analysis and/or Use Macro
and select the respective analysis setting or macro. If you activate both, the macro is exe-
cuted first and then the image analysis is performed to detect the sample.
3. In case of Standard sample detection, select the Sample Detection Mode. You have the
choice to set up the sample detection manually, interactively, or automatically. Interactive means that the sample detection is done automatically, but the scan is interrupted to allow checking the results with the Sample Detection wizard before continuing the scan.
4. If you have selected an automatic mode, select a Recognition Type and adjust the detec-
tion threshold using the histogram controls. For more information, see Sample Detection
Settings for Automatic and Interactive Mode [} 1127].
5. If you have selected the manual mode, draw your graphics into the image or define your
grid. For more information, see Sample Detection Settings for Manual Mode - Creating A
Grid [} 1131].
6. Click Next.
à The Coarse Focus Map [} 1133] step opens.
You have set up your sample detection settings and can continue with the setup on the Coarse
Focus Map [} 1133] step. For more information, see Adjusting Settings for Coarse Focus Map
[} 1089].
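As an illustration of the principle behind the detection threshold mentioned above (an assumption about the general approach, not ZEN's actual implementation), a brightfield preview can be segmented by marking pixels darker than the chosen threshold as sample:

```python
# Minimal sketch of threshold-based sample detection (illustrative only):
# in a brightfield preview, tissue is darker than the bright glass
# background, so pixels below the threshold are classified as sample.

def detect_sample(pixels, threshold):
    """Return a binary mask: True where a pixel belongs to the sample."""
    return [[value < threshold for value in row] for row in pixels]

# Tiny 2x3 example preview: bright glass (~250) with darker tissue pixels
preview = [
    [250, 248, 120],
    [251, 110, 100],
]
mask = detect_sample(preview, 200)
print(mask)  # dark tissue pixels are marked True, glass pixels False
```

In the interactive mode described above, such a mask would then be presented for review before the scan continues.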
In the Coarse Focus Map step, the system tries to roughly find the focus position where the sample is located. This is in preparation for creating the fine focus map which is used during scanning. Ideally, you should use the objective with the smallest magnification here (5x). Image quality is not important at this step; it just needs to be good enough to guarantee reliable focusing.
To adjust the coarse focus map, follow this description. For general information about focus
maps, see Focus Maps [} 1108].
Prerequisite ü You are in the Coarse Focus Map [} 1133] step of the Advanced Scan Profile Editor [} 1120].
1. In the Light Path Components section, select an objective with low magnification.
2. In the Channels section, add a channel which is suitable for focusing, e.g. TIE is suitable for
most biological tissues. Select a channel that distributes the signal homogeneously in the
sample. Note that if more than one channel is added here, only the reference channel is
used.
3. Navigate to the specimen with the Stage tool. Alternatively, click inside the slide shown in
the Navigation tab and the software moves to the specimen at the selected position.
4. Click Live or Snap to check the (live) image.
à The live image is displayed.
5. Click Find Focus to start the autofocus.
6. Check the image and adjust the Exposure and focus settings, if necessary. The medium
setting is usually sufficient for coarse focusing. Note that for all transmitted light channels
utilizing the flash operation the search range setting is ignored as a full z-stack is always ac-
quired quickly.
7. Go to the Focus Point section.
à By default, Use Adaptive Focus Point Distribution is activated. For more information,
see Using Adaptive Focus Point Distribution [} 1091].
8. If Use Adaptive Focus Point Distribution is deactivated, see Adding or Modifying Focus
Point Distribution Strategies [} 1090].
9. To display and verify the focus points, click Verify Support Points. Note that this option is
only available if the editor has been opened for a particular slide with existing preview.
à The Focus Point View [} 1153] opens. For more information, see Working with Focus
Points [} 1096].
10. Select a Sharpness Measure Set. For detailed information, see Focus Map Settings
[} 1133].
11. Click Next.
à The Fine Focus Map [} 1133] step opens.
You have set up your coarse focus map and can continue with the setup on the Fine Focus Map
[} 1133] step. For more information, see Adjusting Settings for Fine Focus Map [} 1092].
See also
2 Focus Point Strategy Sets [} 1135]
Prerequisite ü You are in the Coarse Focus Map or the Fine Focus Map step of the Advanced Scan Pro-
file Editor [} 1120].
1. Click to add a new focus point distribution strategy. Alternatively, double click an ex-
isting entry to modify it.
à The Add focus point distribution strategy dialog opens.
2. Select the desired Strategy and set its individual parameters, if necessary. For information
on the individual strategies, see Focus Point Strategy Sets [} 1135].
3. Click Add.
à The strategy is added to the table and the dialog closes.
You have added or modified a focus point distribution strategy. You can continue the setup of the
remaining parameters, see Adjusting Settings for Coarse Focus Map [} 1089] or Adjusting Set-
tings for Fine Focus Map [} 1092].
This functionality is only available if Show all is activated. If samples on a slide have different
sizes, it is not possible to use only one focus point distribution strategy set. For smaller samples
like Tissue Micro Arrays the set Center of Gravity would be the best choice, for mid-sized speci-
mens Every Nth tile (with n = 3), and for larger objects the Onion skin.
To resolve this issue, ZEN offers a so-called Adaptive Focus Point Distribution. With this fea-
ture you can select a focus point distribution method based on the object size.
Prerequisite ü You are in the Advanced Scan Profile Editor [} 1120], in one of the Focus Maps steps
[} 1133].
ü You have activated Show All.
1. Activate the checkbox Use Adaptive Focus Point Distribution.
à The section Adaptive Focus Point Distribution is displayed.
2. In the Settings drop-down list, select DefaultAFPDSetting. This creates five size-dependent focus point distribution settings from 0 to 2500 mm².
6. Select the start size of the object (the first should always start with 0). With the dropdown
menu you can select the desired Strategy. For information on the individual strategies, see
Focus Point Strategy Sets [} 1135]. Based on the selected strategy, the dialog displays other
possible settings (e.g. for Every Nth tile the value for N).
7. Click Add.
à The distribution strategy is added to the list.
8. Repeat the previous two steps until all your strategies are added, and then click Cancel to
close the dialog.
9. Click and select Save.
You have created a set of adaptive focus point distribution strategies and saved the setup. All the
added strategies are displayed in the list of the Adaptive Focus Point Distribution section.
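The idea behind the adaptive distribution can be sketched as follows. The strategy names come from the text above; the size thresholds (100 mm² and 900 mm²) are invented example values for illustration, not the actual defaults of DefaultAFPDSetting.

```python
# Illustrative sketch (not ZEN code): each entry maps a start size in mm²
# to a focus point distribution strategy. For a detected object, the
# strategy with the largest start size not exceeding the object area wins.

STRATEGIES = [  # (start size in mm², strategy) - thresholds are examples
    (0, "Center of Gravity"),       # small samples, e.g. tissue micro arrays
    (100, "Every Nth tile (n=3)"),  # mid-sized specimens
    (900, "Onion skin"),            # large objects
]

def pick_strategy(area_mm2, table=STRATEGIES):
    """Select the distribution strategy for an object of the given area."""
    chosen = table[0][1]
    for start, name in table:       # table is sorted by ascending start size
        if area_mm2 >= start:
            chosen = name
    return chosen

print(pick_strategy(25))    # Center of Gravity
print(pick_strategy(400))   # Every Nth tile (n=3)
print(pick_strategy(1600))  # Onion skin
```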
NOTICE
Risk of damaging the objective
A focusing range that is too large can damage the objective and, in extreme cases, the slide.
4 Based on the working distance of the objectives, the maximum focusing range must not
exceed 100 µm for a Plan-APOCHROMAT 40 x 0.95. For every other objective, a focus-
ing range below 500 µm is acceptable. This considers a typical cover slip thickness (170
µm) and a thickness of a specimen of around 10 µm. It is recommended to use a flash
based contrast method (e.g. TIE). In this case, since the flash modes are acquired very
quickly, it is not necessary to restrict the fine focus range to a very narrow range. The fine
focus map will be more reliable with a slightly larger focusing range.
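The limits from this notice can be expressed as a simple check. This is a hypothetical helper for illustration only, not a ZEN function:

```python
# Hypothetical check derived from the notice above: the focusing range
# must not exceed 100 µm for a Plan-APOCHROMAT 40x/0.95; for every other
# objective a range below 500 µm is acceptable.

def focusing_range_ok(range_um: float, objective: str) -> bool:
    """Return True if the focusing range is within the safe limit."""
    limit_um = 100 if objective == "Plan-APOCHROMAT 40x/0.95" else 500
    return range_um <= limit_um

print(focusing_range_ok(150, "Plan-APOCHROMAT 40x/0.95"))  # False
print(focusing_range_ok(150, "Plan-APOCHROMAT 20x/0.8"))   # True
```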
This step works exactly as the Coarse Focus Map step. The difference to the Coarse Focus Map
is that here the focus map is created which is used during the scan. Here the same objective
should be used as in the final scan. To adjust the fine focus map, follow this description. For gen-
eral information about focus maps, see Focus Maps [} 1108].
Prerequisite ü You are in the Fine Focus Map [} 1133] step of the Advanced Scan Profile Editor [} 1120].
1. If you want to copy the channel settings from the Coarse Focus Map step, click Copy
Previous Setting.
à All channel settings of the Coarse Focus Map step are copied into this step. The 5x objective is exchanged with the 20x automatically. If no 20x objective exists, the lower-magnification objective with the largest numerical aperture available on the system is selected.
2. If you want to change the automatically selected objective, select the objective with which
you also want to perform the actual scan in the Light Path Components or in the Experi-
ment Settings section (depending on whether Show All is activated or not).
3. In the Channels section, add a channel which is suitable for focusing. Note that if more
than one channel is added here, only the reference channel is used for focusing.
4. Navigate to the specimen with the Stage tool. Alternatively, click inside the slide shown in
the Navigation tab and the software moves to the specimen at the selected position.
5. Click Live or Snap to check the (live) image.
à The live image is displayed.
6. Click Find Focus to start the autofocus.
7. Check the image and adjust the Exposure and focus settings, if necessary.
8. Go to the Focus Point section.
à By default, Use Adaptive Focus Point Distribution is activated. For more information,
see Using Adaptive Focus Point Distribution [} 1091].
9. If Use Adaptive Focus Point Distribution is deactivated, see Adding or Modifying Focus
Point Distribution Strategies [} 1090].
10. To display and verify the focus points, click Verify Support Points. Note that this option is
only available if the editor has been opened for a particular slide with existing preview.
à The Focus Point View [} 1153] opens. For more information, see Working with Focus
Points [} 1096].
11. Select a Sharpness Measure Set. For detailed information, see Focus Map Settings
[} 1133].
12. Click Next.
à The Scan [} 1138] step opens.
You have set up your fine focus map and can continue with the setup on the Scan [} 1138] step.
For more information, see Adjusting Scan Settings [} 1093].
See also
2 Focus Point Strategy Sets [} 1135]
Info
Camera
When adding a channel for a contrast method, the system automatically selects the suitable
camera, for example a monochrome camera for fluorescence or a color camera for brightfield
or polarization. When mixing brightfield and fluorescence channels, only one camera can be
selected for the scan setting.
Info
Scan Settings
You can select more channels than you actually need for the experiment to make the profile
more universal, and you can select and deselect single channels at a later stage.
The focus is defined by the fine focus map. The reference channel in the Channels section is
used just for focusing during setup of the scan step and also defines the stitching reference
channel.
Prerequisite ü You are in the Scan [} 1138] step of the Advanced Scan Profile Editor [} 1120].
1. If you want to take the settings from the Fine Focus Map step, click Copy Previous Set-
ting.
à All settings of the Fine Focus Map step are copied into this step.
2. In the Light Path Components or in the Experiment Settings section (depending on
whether Show All is activated or not), select the objective.
3. In the Channels section, add all required channels for your scan.
4. Navigate to the specimen with the Stage tool. Alternatively, click inside the slide shown in
the Navigation tab and the software moves to the specimen at the selected position.
5. Click Live to check the (live) image.
à The live image is displayed.
6. Check the image and adjust the Exposure settings, if necessary.
à For brightfield or polarization, the color camera is used and automatically set to 8/24 bit.
Try to fill the image histogram well, but avoid saturation.
à For TIE or Fluorescence, the monochrome camera is automatically chosen and is set to 14
bit. Try to set exposure and lamp brightness that the structures are well visible but avoid
unnecessarily long exposure times and saturation, especially for fluorescence.
7. Click Find Focus to use the autofocus with the selected reference channel.
à Focusing during the scan is done via the coarse and fine focus strategy. Focusing during
setup of the scan experiment is only used to help setting up the conditions of the experi-
ment and also to determine possible focus offsets between channels. This is why the ref-
erence channel used for focusing in this step can be different from the channel used in
the fine focus strategy.
8. Go to the section Z-Stack Configuration and set up a z-stack acquisition, if required. For
more information in this particular context, see also Determining the Z-Stack Configura-
tion [} 1094].
9. Go to the Online Processing section.
10. Define how the stitching is carried out.
11. Select Online, if stitching should be performed during the acquisition. The standard setting
is online stitching as this provides the best performance in terms of the processing time of
the slide.
12. If you select Offline, the image is acquired and then stitched automatically after the acquisition is finished. This method can produce better stitching results overall, but the stitching time is added to the total scan time.
13. If you select None, no stitching is applied at all. The image is unstitched and can be
stitched later within ZEN slidescan clinical or with another software. This option reduces
processing load for the system and can sometimes be helpful when acquiring very large im-
ages.
14. If the offline mode is selected, you can activate Fuse, so the system fuses the tiles after
stitching. This can sometimes reduce shading effects. However, due to limitations in the
maximum file size accepted by the operating system, images will still consist of tiles but in
this case without overlap.
15. Activate Pyramid active (this should always be activated, as it speeds up viewing of the image afterwards).
16. Adjust the compression settings which are applied as part of the online processing.
à If JpegXR active is deactivated, the image is saved uncompressed. Be aware that uncompressed Axioscan images can be quite large on disk and that data handling can become a challenge. This is why lossy compression is always active by default.
17. If JpegXR active is activated, you can activate Lossless Compression so the image is only
compressed to the extent that no information is lost.
18. If Lossy Compression is activated, define the degree of Quality as the compression al-
ways involves a loss of information. In the case of polarization it may be sensible to select a
lower compression level (e.g. Lossless Compression), especially when acquired at 8/24 bit
depth as the standard lossy compression could result in visible compression artefacts in the
darker, low dynamic range image areas.
à You have set up the settings for scanning.
19. Click Finish.
à The editor closes and the profile is validated. For more information, see Profile Valida-
tion [} 1096].
You have set up and adjusted your scan profile with the advanced editor.
The section Z-Stack Configuration is important if you want to automatically acquire a z-stack
with or without extended depth of focus (EDF). Note that, depending on the objective, con-
denser, and camera, a 20x Plan-APOCHROMAT 0.8 has a depth of focus of around 1 µm and the
40x Plan-APOCHROMAT 0.95 has a depth of focus of around 0.5 µm. Scanning any specimen thicker than that with such objectives can pose a challenge. In such cases, acquiring a z-stack can be helpful to record the entirety of the specimen in focus.
1. The best way to evaluate the optimal settings is to go to a representative region of the
specimen and focus through the specimen. Note down the focus values for when the sam-
ple gets into focus and out of focus again to determine the thickness. You can use the in-
formation displayed in the Focus tab at the bottom of the screen.
2. Click Reset to set the Measure Distance value to 0, e.g. at the focus position where the sample first gets sharp. Then focus through the sample until you start losing focus again. Note down the offset value; from it you can easily determine the thickness.
3. It is recommended to repeat this procedure for a few different regions of the specimen to
capture any variability in thickness.
4. Use the largest z-range, add 20% and use the result as the height of the z-stack. (Empirical
research has shown that it is recommended to add 20% to the z-range as the autofocus is
not always exactly in the middle of the specimen.)
5. Set the interval as desired. For brightfield, sampling according to the depth of focus of the
objective usually is sufficient. For fluorescence stacks which should be used for performing
3D deconvolution, setting an interval of half the axial resolution of the objective is recom-
mended. For the 20x Plan-APOCHROMAT 0.8 objective, an interval of 0.5 µm would be suitable. However, this will result in a larger number of z-steps and a lower scanning speed.
6. If you want to get a sharp image of a thick specimen and do not want to preserve the z-
stack, you can activate the extended depth of focus (EDF active).
à If active, the software uses the images captured in different focus planes for each image field and combines the sharp information from all planes into a single sharp image of the whole sample.
à It is strongly recommended to use the standard parameters, as these have been selected for their optimal fit. For brightfield, Variance is usually a good choice, while for fluorescence stacks the maximum intensity projection (MIP) works well.
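The rules in steps 4 and 5 above can be condensed into a short calculation. The following is a minimal illustrative sketch, not part of the ZEN software; the axial resolution approximation (λ·n/NA²) and the 0.55 µm emission wavelength are common assumptions, not values from this manual:

```python
def z_stack_height_um(largest_measured_range_um: float) -> float:
    """Step 4: add the recommended 20 % margin to the largest measured z-range."""
    return largest_measured_range_um * 1.2

def deconvolution_interval_um(emission_um: float, na: float, n: float = 1.0) -> float:
    """Step 5: half the axial resolution, approximated here as lambda * n / NA^2."""
    return (emission_um * n / na ** 2) / 2

# A largest measured range of 25 um gives a stack height of 30 um.
print(round(z_stack_height_um(25.0), 1))
# 20x/0.8 objective with green emission (~0.55 um): ~0.43 um,
# in line with the ~0.5 um interval suggested above.
print(round(deconvolution_interval_um(0.55, 0.8), 2))
```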
See also
2 Manual Extended Depth of Focus Tool [} 247]
The software includes a profile validation: the system validates the profile when you leave the advanced editor.
The profile validation ensures that only meaningful profiles are created and that the profile setup does not risk damaging the system through repetitive use of components that are not suitable for such high numbers of cycles. This includes, for example, making sure that only one camera is used and that the condenser components are not unduly stressed by the selected order in which images are acquired.
NOTICE
Risk of Premature Wear of Viluma 7
The intensive parallel use of the wavelengths 555 nm and 590 nm in one scan can result in premature wear of the Viluma 7, as the bandpass switching component is not designed for heavy duty cycles. The profile validation does not prohibit using both wavelengths together in one scan, but displays a warning.
The software also validates the coarse and fine focus. It ensures that only one channel is activated for each of the focuses and that the focus channel is also activated as a channel.
If a correction can be made without user feedback, the change is applied and the user does not get a message. For certain settings, the user is warned and can change the sub-optimal settings, but can also accept them, in which case the system finishes the profile setup. Critical settings that could harm the system must be changed before the user can finish the wizard.
NOTICE
Reduced Condenser Lifetime When Switching Components
The condenser of the Axioscan 7 has several motorized components. Some of these components are not designed to be switched constantly during an experiment. This is primarily relevant for switching the modulation disk between channels; e.g. when acquiring cPol together with brightfield, the condenser's lifetime limit would quickly be reached. The software therefore automatically switches the order in which dimensions are acquired to the All tiles per channel mode. Instead of switching through all channels for one tile, all tiles are acquired for one channel setting, the condenser components switch, and the second channel is then acquired for all tiles. The resulting images are automatically aligned to each other. As a significant additional benefit of this approach, the acquisition time is markedly reduced.
NOTICE
Severe Limitation of Condenser Lifetime in All Tiles Per Channel Mode
Forcing the system to operate the condenser in All Tiles Per Channel mode will severely limit the lifetime of the condenser. This type of use is not intended, and the software posts a warning message every time this setting is set incorrectly.
Prerequisite ü You have opened the Focus Point View [} 1153] in the Focus Map Settings [} 1133] steps of
the Advanced Scan Profile Editor [} 1120], or you have opened the Sample Detection wiz-
ard and have activated the functionality to verify focus points.
1. To get the live image at a particular position, double click in the overview image.
Prerequisite ü You have inserted all trays and slides that you want to scan.
ü You have set up the scan profile(s) for your specimens. For more information, see Setting Up
Scan Profiles [} 1082].
1. At the top of the Magazine view, select the scan profile you want to use for all slides with
status new.
à The scan profile is set for all slides with status new.
2. Activate all slides you want to scan.
3. If necessary, select or adapt the scan profile for each slide individually.
4. In the Naming Definition Tool [} 1148], select the naming definition for your images.
5. To create a preview scan first, click Start Preview Scan on the Scan Tab [} 1142].
à A preview scan is created for all activated slides.
6. To check and correct the sample detection results during your (preview) scan, use the Sam-
ple Detection Wizard [} 1119]. For more information, see Checking and Correcting Sample
Detection Results with the Sample Detection Wizard [} 1098].
7. To start your scan experiment, click Start Scan.
à The scan starts for all activated slides. Axioscan 7 clinical can only scan one slide at a time.
8. To monitor your acquisition, use the Acquisition Monitoring Tool [} 1151]. For more infor-
mation, see Monitoring the Acquisition [} 1104].
You have started the scan for your slides.
Info
Viewing/Processing Images During Scan
While it is possible to open and view scanned images while the system is still scanning slides, it is not recommended to do image processing (using any of the functions available on the Processing tab) while the system is actively scanning. Image processing consumes computer resources which are then not available for scanning slides, leading to slowdowns and possible interruption of the scan.
See also
2 Image Views [} 990]
10.3.2.5 Checking and Correcting Sample Detection Results with the Sample Detection Wizard
1. Next to the scan profile of the respective slide, click and select Check and correct
sample detection results.
à The Sample Detection Wizard [} 1119] opens.
2. In the image, check the red rectangle, which defines the region where the automated sam-
ple detection is looking for specimens.
3. If necessary, adjust the red rectangle to include the specimen.
4. To mark additional regions of interest, select a graphical element in the Graphics tab and
set it up in the image.
à An additional region is added in the image. Note: Be aware that the Automatic Sample
Detection Mode will switch to Manual mode as soon as a Scan ROI is added manually.
This means that the modified scan profile is saved and will not detect tissues on other
slides any longer.
5. If necessary, adjust the settings on the left side, which are basically the same as in the Sam-
ple Detection Settings [} 1126] step of the Advanced Scan Profile Editor [} 1120].
6. If you want to see detected focus points, go to Show Focus Points and select which focus points should be displayed (of the Coarse Focus or the Fine Focus map).
à The focus points are displayed in the image. Note that you cannot adjust the region
when you are in focus mode, only when None is selected.
7. To relocate a point, use drag and drop.
8. To add an additional focus point, right click on the desired position and select Insert Focus
Point.
The functions Shading Reference for Acquisition (only available if you have started a profile
with acquisition) and Shading Reference for Processing are used to generate shading reference
images. The function Shading Reference for Acquisition creates a shading reference and
places it into the calibration manager. This reference can then be applied to all future scans auto-
matically to produce images without shading. With this function no separate shading reference
image is being created. The function Shading Reference for Processing creates a shading refer-
ence file as a separate document which can then be used by the Shading Correction function. It
does not place a reference into the calibration manager. This is to allow interactive shading cor-
rection of an already acquired image. Note that these processing functions cannot process z-stack
images.
While in principle applicable to all types of images, these functions are mainly intended for fluorescence multichannel scans, where creating a good shading reference is often difficult due to a lack of suitable calibration samples. For brightfield scans, shading correction is done fully automatically, so this function should not be needed there.
The function Shading Reference for Acquisition creates a separate shading reference image for
every channel of the processed image. This image is stored directly to the calibration manager to
be available for the next scans.
Info
200+ Tiles needed
To produce a good shading reference image, ideally more than 200 tiles per channel should be used. This can usually only be achieved when scanning with 20x or higher magnification. When scanning with 5x or 10x magnification, use the manual calibration available in the channel tool of the Advanced Scan Profile Editor (--> Define).
Prerequisite ü You have created an image with all the channels for which you want to create a shading reference image. A good reference image requires more than 200 tiles per channel.
ü Avoid any over- or underexposure in the input image.
ü Fill the dynamic range of the used camera to at least 30% of the histogram.
ü The acquisition of the input image must be done without activated shading correction.
ü It is recommended to turn off camera binning when acquiring the input image. The use of fluorescence slides for creating fluorescence reference images sometimes has disadvantages (e.g. sparse distribution of the signal, bleaching). A good alternative is to use a standard H&E sample and utilize its autofluorescence. This usually produces good shading references for a wide range of fluorescence channels (DAPI, FITC, Cy3).
ü It is advised to scan without overlap. In addition to the other settings mentioned above, the Smart Profile Selection wizard provides a special profile for this purpose (reference sample). Use this profile as a starting profile for this kind of calibration.
4. Set the desired parameters. For a detailed description of the parameters, see Shading Ref-
erence for Acquisition [} 88] or Shading Reference for Processing [} 89].
5. Click Apply to start the processing function.
Info
Individual References for Multi Channel Image
While it is normally best to use the merge channel option, it is also possible to create individual reference files per channel. This can make it easier to assign the correct references to multichannel images whose channel combinations differ from those in the reference image.
See also
2 Creating a Shading Reference from Tile Image [} 1099]
The Axioscan scans the slides in the order defined in the Magazine view. It is possible to change the order of individual slides within a tray or to move whole trays to another position, which changes the order in which the system scans the slides. This can also be done while the system is already scanning a slide. This way, urgent samples can be moved forward and scanned sooner.
1. To move a slide to another position, grab the active area and drag it to the position you
want.
2. To move a tray to another position, grab the border area of the tray and drag it to the posi-
tion you want.
2. Enter a name and click . Alternatively, enter a name and press Enter.
à The Naming dialog opens.
3. Enter your naming choices for Prefix, Suffix, set the number of Digits and the Initial
Counter Value.
4. In the Format ID text field you can build your own preferred naming convention. This is
done by using variables which are listed in the information pop up window. Enter the de-
sired variable preceded by a % character, e.g. %N for the original name, %P for a fixed
prefix, %X for the date as defined in the computer date settings.
à The parameter IDs are added to and displayed in the Format input field, resulting in a string like %X-%P-%N. The resulting file name is displayed in the Name Preview text field.
5. Click OK.
You have created your own naming convention to automatically name the acquired images.
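The %-variable substitution described in step 4 works along the following lines. This is a simplified sketch of the idea only; the variable set shown here is limited to the three examples above and the date format is an assumption, not the ZEN implementation:

```python
from datetime import date

def build_name(format_id: str, original_name: str, prefix: str) -> str:
    """Replace the %-variables described above (%N original name,
    %P fixed prefix, %X date) in a Format ID string. Illustrative only."""
    substitutions = {
        "%N": original_name,
        "%P": prefix,
        # The real format follows the computer's date settings; ISO is assumed here.
        "%X": date.today().isoformat(),
    }
    result = format_id
    for variable, value in substitutions.items():
        result = result.replace(variable, value)
    return result

# "%X-%P-%N" -> e.g. "2024-05-01-Scan-Slide01"
print(build_name("%X-%P-%N", "Slide01", "Scan"))
```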
Info
Insert the Trays
Before starting the import, you first have to insert the trays with the slides into the magazine; only then can the software check for consistency.
This functionality can speed up the scan workflow considerably. It offers a possibility to assign
scan profiles, image names and scene names to a collection of slides and then start scanning, all
as one action. The import is done via a comma-separated values file (csv). For more information,
see General Information and Examples for Import [} 1102].
Prerequisite ü The slides you want to scan are loaded into the system and each slide has a defined tray and
slide position.
ü You know the names of the samples.
ü You have defined suitable scan profiles.
1. In the Naming Definition tool, click Import Name, Profile and Scene names.
à A file browser opens.
2. Navigate to the respective folder and select the .csv file.
3. Click Open.
The names, profiles and scenes information are imported based on the .csv file.
The import is done via a comma-separated values (csv) file. Each slide is described by one row (one slide per text line) in this format:
tray number, slide position, image name, profile name, scene name 1, scene name 2, ...
Examples
Example A
1,1,Mouse1,Profile001,brain1,brain2,brain3
For the first slide of the first tray, the system defines Mouse1 as the image name and applies Profile001. If the preview detects several scenes, the system names the first scene brain1, the second scene brain2, etc.
Example B
§ 1,1,Mouse1
§ 1,2,Mouse2
§ 1,3,Mouse3
Here the system only applies the image names to the specified tray and slide positions.
Example C
§ 1,1,,Profile001
§ 1,2,,Profile002
§ 1,3,,Profile003
Here the system only applies the profile names to the specified tray and slide positions.
Notes
§ If the import file contains a profile which is not in the list of the available profiles, the soft-
ware displays a warning and no profile is applied (last selected profile remains selected).
§ If the import file contains a position which is not occupied by a slide, the software displays a
warning and skips the import for this slide position.
§ If you defined more scenes than the system detects during the preview, the system does not display a warning and stops the naming assignment after the last detected scene.
§ If you defined fewer scenes than the system detects during the preview, the system does not display a warning, names all the scenes for which names are available and then stops the naming. However, all scenes will be scanned.
§ By default the comma (,) is used as separator. It is possible to select another separator (tabula-
tor, semicolon, space) in Tools > Options > Acquisition > Axio Scan.
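A file in the row format implied by the examples above can be prepared with a few lines of code. This is a hypothetical sketch for generating such a file with Python's standard csv module, not a ZEN utility; the row values are taken from the examples:

```python
import csv
import io

# Rows follow the pattern shown in the examples above:
# tray, slide position, image name, profile name, scene name(s).
rows = [
    [1, 1, "Mouse1", "Profile001", "brain1", "brain2", "brain3"],
    [1, 2, "Mouse2", "Profile001"],   # image name and profile only
    [1, 3, "", "Profile002"],         # profile only, image name left empty
]

buffer = io.StringIO()
csv.writer(buffer, lineterminator="\n").writerows(rows)
content = buffer.getvalue()
print(content)
```

Writing the file this way also makes it easy to switch the separator (e.g. to a semicolon or tab) via the `delimiter` argument of `csv.writer`, matching the separator configured under Tools > Options > Acquisition > Axio Scan.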
1. In the Naming Definition tool, activate Name, Profile and Folder from Barcode.
à Several input mask fields are displayed.
2. In the input mask fields Name, Profile and Sub-Path, enter the corresponding character numbers defined in the barcode string. You can also define and combine number ranges, similar to selecting pages for printing, e.g. 1-4,10-14.
3. Define the desired character numbers for the fields you want. If you do not want to use a
certain setting, simply leave the input field empty. Sub-Path defines a folder name which is
created as a subfolder in the storage path defined in the Storage Location tool where
scanned images are stored.
à If for Name a range is defined, the software inserts the text Name from barcode under
Naming (in the Magazine view), as the system does not know the naming until the bar-
code of this slide is read.
à The same applies for the profile, in this case the text is Profile from barcode. If the pro-
file is derived from the barcode, the system takes the label capture information (this in-
cludes the kind of barcode, etc.) from the profile selected on the top of the Magazine
view.
4. As coding of such profiles in the barcode is prone to errors, you can activate Match only
the beginning of scan profile name to use only the first characters to identify a profile. If
so, make sure to use a unique set of characters for the beginning of the profile you want to
use.
You have defined the name, profile or the subfolder based on the barcode. For additional infor-
mation, see Examples when Using a Barcode [} 1104].
Example A
Character string encoded in barcode:
Example B
Barcode: Path_DoeJohn_003_FAST_HE
Example C
In administering and creating profiles, you might prefer descriptive names to recognize them more easily (e.g. Brightfield-H&E-strong). The name could begin, for example, with a unique number used as an identifier (use only the name without the czspf extension). The profile could be named, for example, 005_Brightfield-H&E-strong. You then enter 1-3 into the profile mask and activate Match only the beginning of the profile name. Now only the number 005 is used to identify the profile. Make sure that only one profile name beginning with the characters 005 exists.
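The character-number ranges used above (e.g. 1-4,10-14, counted from 1 as in a page-selection dialog) could be evaluated along the following lines. This is a hypothetical sketch of the idea, not the ZEN parser; the barcode string is taken from Example B:

```python
def extract(barcode: str, ranges: str) -> str:
    """Pick characters from a barcode string by 1-based ranges like '1-4,10-14'."""
    picked = []
    for part in ranges.split(","):
        if "-" in part:
            start, end = (int(x) for x in part.split("-"))
        else:
            start = end = int(part)
        # Convert 1-based inclusive range to a Python slice.
        picked.append(barcode[start - 1:end])
    return "".join(picked)

barcode = "Path_DoeJohn_003_FAST_HE"
print(extract(barcode, "6-12"))    # DoeJohn
print(extract(barcode, "14-16"))   # 003
```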
To check the image quality (e.g. focus and exposure time) during the scan, the Acquisition Moni-
toring Tool [} 1151] is available. When activated, camera frames are grabbed from the running
scan and displayed in regular intervals.
Info
Performance
Acquisition monitoring can have a negative impact on performance and should therefore be
used sparingly.
Info
Frame Rate
The frame rate is limited to 1 frame per second (fps) by default. You can globally change the frame rate limit under Tools > Options > Acquisition > Axioscan. Be careful when changing this value, as it could result in e.g. black frames.
1. On the Scan tab, in the Acquisition Monitoring tool, activate Enable for current scan.
2. Click Start Scan to start your scan.
à The Acquisition Monitoring tool displays an overview of the specimen with a red rec-
tangle indicating the currently scanned region.
à An image document called Acquisition Monitoring, which displays the image of the
currently scanned region, is displayed in the Center Screen Area.
3. To check the image quality, click to stop the (live) acquisition monitoring.
à The monitoring pauses and the last acquired image is displayed. The image document
contains the complete information (e.g. the complete z-stack) which you can check.
During the scan process, the system creates temporary files, e.g. for the preview image, the label image and others. These, together with the file for the actual scan result, are stored in the storage location path by default. When the scan process is finished, the scan result image is saved and the temporary files are deleted. Sometimes, if for example the scan results are to be automatically processed by other software, the temporary files could cause this external software to malfunction. In such cases it is useful to store the temporary files in a separate folder.
Info
Location of temporary folder
This temporary folder should be located on the same physical storage device so that the final-
ized scan result can quickly be moved to the storage location.
Info
Preview and label image as individual files
You can also permanently store both preview and label image as individual files if these images
are required. For this, go to Tools > Options > Acquisition > Axioscan and activate Keep
separate label image and Keep separate preview image.
If you want to use a separate folder for temporary files, follow this instruction:
See also
2 Storage Location Tool [} 1150]
As application software, ZEN relies on the stability and reliability of the workstation's operating system. ZEN slidescan uses Microsoft Windows™ as its operating system (currently Windows 10). ZEN therefore cannot operate more reliably than the operating system itself. All measures are taken to ensure reliable and fast interaction between ZEN and the operating system; nevertheless, general limitations of the operating system may apply. It is therefore recommended to restart the complete system at regular intervals to ensure proper performance, according to the following standard operating procedure (SOP).
Scope of application: This SOP is valid for all Axioscan system configurations (incl. control com-
puters) and all software versions.
Prerequisite ü Make sure all scan processes are finished and ZEN is idle. The main status display on the de-
vice must light up continuously green.
1. Close ZEN and all other running application software.
2. Press the Standby button on the front of the Axioscan to shut down the device.
3. Shut down the control computer by means of the function provided by the operation sys-
tem.
4. Switch the control computer on by pressing the on/off button.
à The workstation's operating system has started successfully.
5. Press the Standby button on the front of the Axioscan to initialize the device and wait until
the system LED shines continuously green.
à The Axioscan system unit has been successfully initialized.
6. On the control computer, log into the Windows operating system.
7. Start the ZEN software for Axioscan (ZEN slidescan) on the control computer.
Axioscan, computer and software are now again ready for operation.
The recommended maximum period of continuous operation is five days, assuming heavy use of the system. While the system will likely operate without problems over longer periods of continuous use, we still strongly encourage following this good practice and performing a restart every five days.
It is important that the system scans all samples with the right focus, producing images in which all the relevant sample parts are digitized sharply, not blurred. There are two main causes for blurry images:
1. Samples are usually never perfectly flat
2. Microscope objectives have a narrow depth of focus
At the same time, the system must scan samples quickly, so running an autofocus routine for each camera frame is not a good option, as it would lead to unacceptably long scan times. ZEN slidescan clinical therefore employs an elaborate method to ensure optimal focus even with difficult samples. Instead of continuously autofocusing, a focus map is used which allows the system to automatically follow the sample's focus during rapid scanning. The following chapters explain this in detail.
Note: The explanations in the following chapters assume autofocusing with classical image acquisition, where stage and focus drives are operated in start/stop mode. This method is employed for all fluorescence scans. For brightfield imaging, the much faster flash mode acquisition is used routinely: the focus drive runs continuously and the camera acquires images with a short flash of strobe illumination. Consequently, focusing is much faster for brightfield, so the recommendations for reducing the focusing ranges to gain speed are less relevant. This means, for example, that the range for fine focusing in brightfield can be made larger, and therefore more robust against unexpected sample variations, than recommended for fluorescence.
Axioscan uses a two-step focusing method to give the best results. To understand how and why, we need to look at the microscope slide and specimen in terms of flatness. There are two types of flatness that are important for Axioscan:
One of the first tasks is to define the focus range for the coarse focus objective lens. We have to
be able to find the whole sample and focus the lens on it. The size of the coarse focus range de-
pends on how macro-flat the specimen is.
As you can see in the image above, a specimen that is only slightly tilted requires a lower travel
range for the focusing mechanism than a specimen which is more strongly tilted.
A very important point is the difference between the coarse and fine focus range setup. As de-
scribed in the previous section, coarse focus can be defined using actual z-position values for the
top and bottom of the range.
The image above shows the coarse focus as a range of points between two defined values (the shown values are exemplary; actual values will vary). Fine focus is different: its range is not defined by a top and bottom, but by the position of its middle or center. The range is then set by defining a travel distance, for example 30 µm. The result is a range extending 15 µm above and 15 µm below the center point. In the image, this is shown in green as the range of 30 µm centered around 4010 µm.
To define a focus position, the following happens:
1. The 5x lens runs an autofocus over the defined range and sets a point of best focus (4000 µm).
2. Axioscan changes to 20x fine focus and corrects for the parfocality difference by applying an offset, in this case +10 µm (4000 + 10 = 4010).
3. The defined 20x range of 30 µm is then set around the new center focus value of 4010 by +15 and -15, giving the full range of 30 µm.
4. Micro-flatness variations can then vary by ±15 µm and still be successfully mapped for the scanning process.
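The arithmetic of these four steps can be condensed into a few lines, using the values from the example above. This is a plain illustration of the calculation, not ZEN code:

```python
def fine_focus_range(coarse_best_um: float, parfocality_offset_um: float,
                     travel_um: float) -> tuple:
    """Center the fine focus travel range around the corrected coarse result."""
    center = coarse_best_um + parfocality_offset_um  # apply parfocality offset
    half = travel_um / 2                             # split travel above/below center
    return (center - half, center, center + half)

# Coarse best focus 4000 um, +10 um parfocality offset, 30 um travel
# -> range from 3995 to 4025, centered on 4010.
print(fine_focus_range(4000.0, 10.0, 30.0))  # (3995.0, 4010.0, 4025.0)
```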
The autofocus step size, or interval, is the distance between each focus point as the Axioscan steps through the z-axis to find the best focus. The size of the interval determines the accuracy of the autofocus result. If the interval is too big, the autofocus will be less accurate and reliable. If the interval is very small, the autofocus will be more precise and reliable, but the focus mapping step in the scan will take longer. The optimal step size is one that closely matches the depth of focus (DOF) of the lens being used.
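As a rough guide, the depth of focus can be estimated with the common textbook approximation DOF ≈ λ·n/NA². This formula and the 0.55 µm wavelength are assumptions used for illustration; they are not taken from this manual:

```python
def depth_of_focus_um(wavelength_um: float, na: float, n: float = 1.0) -> float:
    """Diffraction-limited depth of focus, approximated as lambda * n / NA^2."""
    return wavelength_um * n / na ** 2

# 20x/0.8 objective at ~0.55 um: DOF of roughly 0.86 um, so an autofocus
# step of about that size would match the lens.
print(round(depth_of_focus_um(0.55, 0.8), 2))  # 0.86
```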
It is possible to use camera and illumination settings for focusing that differ from those used for scanning. However, it is important not to neglect the imaging settings for focus map creation. Autofocus algorithms work on the basis of contrast between pixels; if the specimen is not correctly imaged (illumination and camera), it becomes harder for the algorithm to detect the correct focus level for the sample. If the autofocus algorithm also requires color information, it is necessary to ensure the correct white balance or color temperature. In short, it is recommended to use the same channel settings for fine focusing as for the actual scan.
Use the information in the image below to estimate the number of fields of view (FOV) that a tis-
sue section covers on the slide. This can help when assigning values for focus map point distribu-
tion density.
A focus point distribution strategy is used to place the focus points for coarse and fine focus. As a
general recommendation, the use of the onion skin focus support point distribution method is a
good start and will work on many samples. With the ability to visualize the focus support points in
the software it is also possible to check if a selected strategy produces a reasonable focus support
point selection, see Working with Focus Points [} 1096]. However, keep in mind that it is possi-
ble to add too many support points, which leads to a problem caused by mathematical overfitting
of the focus map. Essentially this means that the focus map might have "valleys" and "mountains"
where the sample is actually flat. The following chapters describe in detail how to approach the
focus point distribution problem.
Small sample sections like mouse testis or TMA cores do not usually require a large number of fo-
cus points in either coarse or fine.
Due to their small size and probable good macro-flatness either center of gravity or number of
points can be used for coarse focus. If number of points is the selected method, use a value of 4
for the number of points. Center of gravity always uses just 1 point per section.
Fine focus is more flexible because at 20x each section covers about 100 FOVs (10 x 10). A good strategy here would be to use every Nth tile with N = 2 or 3. A value of N higher than 3 probably results in too few focus points for a good focus map. Number of points could also deliver a good result here with 5-6 points per section.
Fig. 98: Focus points with N = 3 at 20x for every Nth tile
All suggestions depend on sample preparation and with poor preparation even small tissue sec-
tions can be very un-flat requiring more focus points.
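For planning, the number of focus points produced by the every Nth tile strategy over a region can be estimated as follows. This is a simplified sketch assuming a full rectangular grid of FOVs; real tissue outlines yield different counts:

```python
import math

def focus_points_every_nth(tiles_x: int, tiles_y: int, n: int) -> int:
    """Approximate point count when every Nth tile carries a focus point."""
    return math.ceil(tiles_x / n) * math.ceil(tiles_y / n)

# A section covering about 10 x 10 FOVs at 20x:
print(focus_points_every_nth(10, 10, 2))  # 25
print(focus_points_every_nth(10, 10, 3))  # 16
```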
Medium sample sections such as mouse brains unsurprisingly need more focus points in coarse
and fine focus.
Fig. 100: Coarse and fine point distribution using every Nth tile
You can see in the image above that for the 5x coarse focus, N=2 gives you six points and at 20x,
N=3 gives 26 points. For a slide of average flatness, this strategy would probably yield good re-
sults. In both cases it is possible to increase the number of focus points to make the focus map
more sensitive. At 5x N=1 is the same as every tile and would produce 21 focus points.
Be careful when using low values of N for fine focus. As you can see from the image, changing N=3 to N=2 adds an extra 31 focus points. This slows down the scan process and should be avoided unless absolutely necessary. Some trial and error is always needed to find the right compromise between speed and focus quality. As always, the flatter the sample, the better.
Large sample sections pose a new challenge as they cover a much larger area of the slide, some-
times all of the space under the coverslip.
Fig. 102: Mouse embryo with onion skin distribution at 20x (32 points)
Density is set to 0.2 for the scan shown in the image above. This means the spacing of the concentric ring layout used to position points is 20% of the width of the slide in the Y direction. Setting a lower number such as 0.1 and increasing the maximum number of points would result in a focus map with more densely positioned points.
10.3.4 Reference
In this wizard you can select pre-defined scan profiles. You can select the Contrast Method (Step
1), Sample Type (Step 2, ff.) and enter a Profile Name and Description (last step).
The system has pre-configured profile templates for the most typical slide types and applications
as well as profiles for reference samples for quality control. These templates are stored in a safe
location and cannot be changed by the user. The default scanning objective for most templates is
the 20x objective with the highest NA installed in the system.
The wizard guides you through several steps to find the best profile as a preset for further adapta-
tions by giving you easy to understand choices. You can select the desired option by double-click-
ing on it or by selecting the option and clicking on the Next button at the bottom.
When you finish the wizard, the selected profile from the profile pool is copied to the system and the Advanced Scan Profile Editor [} 1120] opens automatically so that you can further refine your profile.
Reference Slides
The wizard contains a section called Reference Slides. This section contains seven predefined
profiles. Five of them are used for quality control.
In the factory, two specimens (three if the system has the Polarization option) are scanned:
1. BF 20x -> Specimen BF 03 (rat kidney; H&E) from the sample-set brightfield
(474032-9010-000) at 20x (if a 20x objective is available with the system)
2. BF 40x -> Specimen BF 03 (rat kidney; H&E) from the sample-set brightfield
(474032-9010-000) at 40x (if a 40x objective is available with the system)
3. FL 20x -> FluoCells® Prepared Slide #3 (mouse kidney; F-24630) at 20x (if a 20x objective
and fluorescence is available with the system)
4. FL 40x -> FluoCells® Prepared Slide #3 (mouse kidney; F-24630) at 40x (if a 40x objective
and fluorescence is available with the system)
5. Polarization 20x -> Specimen POL 01 (rat knee; Sirius red) from the sample-set polarization
(474032-9020-000) at 20x (if polarization is available with the system)
Using these samples and the reference profiles, you can scan them directly and compare the image quality with the images created when the system was built in the factory. The remaining two profiles serve the special purpose of performing shading correction for fluorescence (scan without tile overlap; for more information refer to chapter Creating a Shading Reference from Tile Image [} 1099] and the description of Shading Reference for Acquisition [} 88]).
Info
Calibration by service engineer
It is strongly recommended that this procedure is performed by a certified ZEISS service engineer. While customers are allowed to perform this calibration themselves, it is still recommended to call a service representative beforehand for guidance, or should any problems occur.
Info
Geometric Calibration Slide
The first position of the tray must hold the geometric calibration slide, the second position the
color calibration slide. The tray with the calibration slides must be placed in the first position
within the magazine.
This wizard is only available if a preview scan was previously generated. Once the system has
started the preview generation, you can start the wizard immediately and work in parallel.
You usually work directly in the image of the specimen area to mark the region(s) of interest or to delete regions of interest that have been wrongly detected. For more information, see also Checking and Correcting Sample Detection Results with the Sample Detection Wizard [} 1098].
Parameter Description
Next Slide Navigates to the next slide.
Keep Display Settings Over All Previews Activated: Keeps the current display settings for all previews.
Sample Detection Settings The settings are basically the same as described in the Advanced Scan Profile Editor, in the chapter Sample Detection Settings [} 1126].
Show Focus Points Defines if and which focus points are displayed in the image. Focus
points in the image can be moved by drag and drop and added/
deleted via right click, similar to the Focus Point View [} 1153] in the
Advanced Scan Profile Editor.
– Coarse Focus Displays the focus points of the coarse focus map.
– Fine Focus Displays the focus points of the fine focus map.
The system always shows you a live image from the corresponding camera (preview camera/scan camera) while also moving the slide according to the input from the operator. This means that you always work on a physical glass slide.
If you leave the editor, the software validates the profile and you will receive feedback if any corrections are required.
To access all of the functions and properties described in this guide, you have to activate the
Show All mode by activating the checkbox in the upper right-hand corner of the left tool area.
Parameter Description
Next Moves on to the next step of the wizard.
Finish Saves the setup and the changes based on your progress and closes
the wizard.
10.3.4.1.4.1 Overview
Here you can define some global settings for your scan profile. The center screen area of this step
displays specific information about the currently opened profile (e.g. which camera is used for the
preview and which kind of processing is configured).
Parameter Description
Profile Displays the name of the selected scan profile. This name cannot be
changed here.
Description Shows the description of the scan profile. As the information content
of the profile name is always limited, you can insert additional infor-
mation here to highlight the most important profile settings (e.g.,
magnification/z-stack settings/focus map settings).
Slide Selection Only visible if you have started the wizard for a default profile and not
for a specific slide.
Selects the slide on which the profile adaptation is based.
See also
2 Starting the Setup in the Advanced Scan Profile Editor [} 1083]
10.3.4.1.4.2 Label
Here you can select the acquisition parameters for the label area.
The label area ROI (Region Of Interest) is defined by the red rectangle shown in the live image (see
image below). It is possible to adjust the frame in size and position freely to fit with the label on
your slide. The label area is captured with reflected light and a separate microscope camera.
In case you are using the 1x 100 mm x 76 mm tray, four additional controls are displayed because the slide in that tray is too big for the preview, so it is split into two halves.
Parameter Description
Live Only visible when using the 1x 100 mm x 76 mm tray.
Displays the live image from the active camera.
Parameter Description
Color Mode Selects the color mode for the camera.
– RGB Transmits the image data of a color camera unchanged. This corre-
sponds to the standard operating mode of a color camera.
– B/W Treats the image data of the color channels as grayscale. The data of
related color channels are averaged. The saturation of the camera ap-
pears reduced as a result.
Exposure
– Value field Sets the exposure time manually with the input field and the drop-
down selection of the time unit.
– Auto Compensates for the color temperature of the light source automati-
cally to yield a neutral hue. The entire camera sensor area is mea-
sured. If there are no pure white areas on the sample and Auto does
not yield the desired results, you can also use Pick.
– Pick Enables you to select a reference pixel for white balance from the live
image.
Parameter Description
Orientation Selects the alignment of the label on a slide. You can rotate the label image 90° clockwise (CW) or counterclockwise (CCW), rotate it by 180°, or keep the original orientation (Original).
Parameter Description
None Activated: No barcode or text recognition is used.
Barcode Activated: The system recognizes the barcode and saves the barcode
information as metadata within the image.
If you want to use the barcode information as part of the image
name, you have to apply automated naming and use the keyword
RecognizedCode (%N) to make the barcode part of the image name
(see also Creating Your Own Image Naming Convention [} 1101]).
Test Displays the region of interest with the given orientation and applies
the barcode recognition.
Info
Adjusting the Display Curve
You have the possibility to adjust the display curve (see Display tab at the bottom of the window). The display settings are stored within the profile. This also has an impact on the display of the label in the Magazine view.
10.3.4.1.4.3 Preview
In this step you can select the camera which is used for the sample detection.
In case you are using the 1x 100 mm x 76 mm tray, four additional controls are displayed because the slide in that tray is too big for the preview, so it is split into two halves.
Parameter Description
Live Only visible when using the 1x 100 mm x 76 mm tray.
Displays the live image from the active camera.
Parameter Description
Color Mode Selects the color mode for the camera.
– RGB Transmits the image data of a color camera unchanged. This corre-
sponds to the standard operating mode of a color camera.
– B/W Treats the image data of the color channels as grayscale. The data of
related color channels are averaged. The saturation of the camera ap-
pears reduced as a result.
Exposure
– Value field Sets the exposure time manually with the input field and the drop-
down selection of the time unit.
– Auto Compensates for the color temperature of the light source automati-
cally to yield a neutral hue. The entire camera sensor area is mea-
sured. If there are no pure white areas on the sample and Auto does
not yield the desired results, you can also use Pick.
– Pick Enables you to select a reference pixel for white balance from the live
image.
Parameter Description
Preview Cam Uses the preview camera for the sample detection.
Parameter Description
Scan Cam Only available if no scan region (green outline) is defined yet.
Uses the scan camera for the sample detection. This is only recommended if the sample is not visible. The additional Pre-Scan [} 1124] step is displayed as the next step in the wizard if the Scan Cam is selected.
For information on setting the region of interest for preview images, see Setting Up the ROI for
the Preview Image [} 1086].
Info
Avoid Overexposure
Avoid any kind of overexposure of the image (especially outside the slide) when setting up your scan profiles. This is of general importance for all steps of the editor where camera settings are being adjusted. An overexposed image can result in a sub-optimal shading correction, as can an image that is too dim. Focusing can also suffer with images that are either too dark or saturated.
4 To avoid these issues, activate the Range Indicator checkbox on the Dimensions Tab [} 1029] to see overexposed pixels.
4 Adjust the exposure time of the camera or the Flash settings (duration, intensity), depending on which acquisition mode you choose. When you see the first red pixels indicating overexposure, decrease the exposure time until the red pixels disappear. Once they disappear, decrease the exposure time slightly further to ensure that no pixels anywhere in the image are saturated.
See also
2 Setting Up the Preview Acquisition [} 1085]
10.3.4.1.4.4 Pre-Scan
This step is only visible if you have selected the Scan Cam in the previous step. Prescans are useful in all cases where the preview camera is not able to produce an image with enough contrast for the sample detection to work.
Parameter Description
Find Focus Only visible if you have configured a motorized focus in the MTB (MicroToolBox).
Starts an autofocus search using the current settings.
Live Opens Live view and shows the live image from the active camera.
Snap
Light Path Components Displays parameters of the Microscope Control Tool [} 980]. If Show All is deactivated, only the settings for the objective magnification selection are visible.
Parameter Description
Acquisition Mode Only visible if Show All is activated.
Displays parameters of the Acquisition Mode Tool [} 874].
Channels Displays parameters of the Channels Tool [} 914]. For Axioscan spe-
cific channel settings, see also Channels Tool (Axioscan Specific)
[} 1151].
See also
2 Extended Depth of Focus [} 175]
2 Setting Up a Pre-Scan [} 1088]
Prescans are useful in all cases where the preview camera is not able to produce an image with enough contrast for the sample detection to work. Every sample is different, but here are some suggestions for how to create a contrast-rich preview image using the scan camera:
Any focus point that does not lie within a sample-containing area on the slide will not produce a focus value and will be discarded from the calculation.
Parameter Description
Sample Detection
Processor
– Custom Sample Detection Displays controls to define a custom sample detection with the help of an image analysis and/or a macro. The Test button allows you to start a test run of your defined custom sample detection.
See also
2 Defining the Sample Detection Settings [} 1089]
The following settings are only visible if Automatic or Interactive is selected as Sample Detection Mode.
Parameter Description
Recognition Type
– Marker Sets the sample detection based on using a felt-tipped permanent marker to mark the tissue area.
Live Update Activated: Updates the image continuously (live) each time a value is
changed.
Specimen/Marker Displays the display curve for the specimen/markers. Here you define the upper and lower borders of the specimen via the thresholds. If you have activated the Live update, you see the result toward the left of center with a certain delay. The upper border (right side) is the border for the lighter stains and the lower border (left side) is the border that influences the darker stains. Moving the right side further to the right marks the lighter stains; moving the left side further to the left marks the darker stains.
Automatic Detection of High Level Threshold (of Sample) Activated: Automatically detects the distances between the intensity values for pixels representing the glass itself (usually very bright) and the upper threshold value for the specimen detection.
Minimum Region Size Sets the minimum region size in mm² to be scanned. The system does not detect regions if they are smaller than this value.
Parameter Description
Max. Elongation Only visible if Show All is activated.
Sets the maximum elongation ratio for the detected object. This setting can also help to remove coverslip edges. Coverslip edges are typically structures where the ratio between the longer side and the shorter side is high. This means that every structure that has a lower ratio is excluded.
Parameter Description
Prefer Center for Shading Scan Area Only visible if Show All is activated. Only relevant for brightfield flash scans.
Activated: The software forces the region that is used for the shading correction towards the middle of the slide to avoid it being influenced by, e.g., the coverslip edges. By default, this option is deactivated. It is only useful when using objective magnifications of 20x or more.
If you see tiling artifacts in the scanned image, we recommend activating this option and rechecking the shading quality. If this option is activated, the region used for correction may be split into smaller regions. As a consequence, the shading correction scanning is slower, but compared to the total scan time, the increase by a few seconds has no major influence on the scanning time.
Prerequisites
§ For Sample Detection Mode, you have selected Automatic.
§ For Recognition Type, you have selected Sample.
§ If two or more objects are very close to each other and are therefore seen as one object, it is advisable to split them with the Split function, which is available in the context menu opened by a right-click.
§ If Live update is turned off, click Test to test the entered parameters and show the results in the middle part of the window. The detected objects (every object is a single scene) are presented with a green border.
§ To achieve a continuous update, you can activate Live update. Keep in mind that a refresh
can take some seconds after changing a parameter.
§ Once you have determined the appropriate settings, they are saved within the profile. If the settings should be used in another profile, you can save them separately, independent of the scan profile, using the options menu.
Sort Order
In the Sort Order section you can modify the sorting schema (numbering) for the detected ob-
jects (regions/ scenes). You can select from 8 different schemes. The first entry (e.g. Left Right)
provides the first sorting schema and the second entry (e.g. Top Bottom) the second sorting
schema.
Example 1:
Left Right/Top Bottom
The first sorting schema is from Left to Right (red arrow) and the second sorting schema is from
Top to Bottom (green arrow):
Example 2:
Top to bottom/Left to right
The first sorting schema is from Top to bottom (red arrow) and the second sorting schema is from
Left to right (green arrow):
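The two-level sort order can be illustrated with a small Python sketch. The region coordinates and the two-key sorting are illustrative assumptions; the real software orders detected scenes internally:

```python
# Hypothetical region centers as (x, y) tuples standing in for detected
# objects; the two-level ordering works like a primary and secondary sort key.
regions = [(30, 10), (10, 20), (10, 10), (30, 20)]

# Example 1 - Left Right / Top Bottom: x is the primary key, y the secondary.
left_right_top_bottom = sorted(regions, key=lambda p: (p[0], p[1]))

# Example 2 - Top Bottom / Left Right: y is the primary key, x the secondary.
top_bottom_left_right = sorted(regions, key=lambda p: (p[1], p[0]))

print(left_right_top_bottom)  # -> [(10, 10), (10, 20), (30, 10), (30, 20)]
print(top_bottom_left_right)  # -> [(10, 10), (30, 10), (10, 20), (30, 20)]
```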
See also
2 Case B: Automatic Mode/Marker Recognition [} 1131]
Prerequisites
§ For Sample Detection Mode, you have selected Automatic.
§ For Recognition Type, you have selected Marker.
The settings are limited to the definition of the marker threshold (comparable to the sample threshold). The only other adjustable value is the Min region size; the Shading options, the definition of the Sort Order, and the Sort button have the same functionality as in Case A.
If you use the marker on the physical glass slide, make sure that the marked regions contain speci-
mens. Empty space could lead to an incorrect focus map as the software will use the entire
marked region to create the focus point distribution.
See also
2 Case A: Automatic Mode/Sample Recognition [} 1128]
Parameter Description
Draw Graphics Enables you to draw graphical elements into the view.
– Element Type Selects the type of graphical element. You can choose between Circle and Rectangle.
– Element Size Defines the width (W) and height (H) of the element for the Rectangle, or the Radius for the Circle.
– Distance from Origin Sets the distance/offset from the upper left-hand corner in X and Y.
– Grid Definition Defines the number of Rows and Columns of the grid.
Parameter Description
Prefer Center for Shading Scan Area Only visible if Show All is activated. Only relevant for brightfield flash scans.
Activated: The software forces the region that is used for the shading correction towards the middle of the slide to avoid it being influenced by, e.g., the coverslip edges. By default, this option is deactivated. It is only useful when using objective magnifications of 20x or more.
If you see tiling artifacts in the scanned image, we recommend activating this option and rechecking the shading quality. If this option is activated, the region used for correction may be split into smaller regions. As a consequence, the shading correction scanning is slower, but compared to the total scan time, the increase by a few seconds has no major influence on the scanning time.
Parameter Description
Show Shading Scan Area Only visible if Show All is activated.
Activated: Displays the area used to generate the shading reference image for this slide. For certain scan magnifications the region is shown even if it is not used.
Prerequisites
§ For Sample Detection Mode, you have selected Manually.
§ You have selected Draw Graphics.
Info
If you want to mark a sample that is very faint, you can use the Display curve to change the display settings (particularly by adjusting the gamma curve) to make it possible to see even an unstained sample of a reasonable thickness.
4 You have the possibility to adjust the Display curve (see Display Tab [} 1043] at the bottom of the window). The display settings are stored within the profile. This also has an impact on the display of the label in the Magazine view.
See also
2 Case D: Manual Mode/Grid Definition [} 1132]
Prerequisites
§ For Sample Detection Mode, you have selected Manually.
§ You have selected Use Grid Definition.
You can define a grid with rectangles or circles as Element Types in a regular pattern. The reference point for this grid is the upper left-hand corner of the specimen area (red rectangle). Based on this, you can define an offset from the upper left corner in x (A) and y (B) in mm and also the distance between the elements in x (C) and y (D).
Based on the selected geometrical shape (circle or rectangle), you can define a radius or the dimensions in x (F) and y (E). You also define the number of elements in x (in this example, 6) and y (in this example, 3).
You can see the grid as an overlay in the image. It is also possible to make manual adjustments
such as those used for standard graphics in ZEN.
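The grid geometry described above can be sketched as follows. The function name and the example offsets and spacings are illustrative assumptions, not values taken from the software:

```python
def grid_positions(offset_x, offset_y, step_x, step_y, cols, rows):
    """Compute the anchor point of each grid element in mm, relative to
    the upper left-hand corner of the specimen area (the red rectangle).

    offset_x/offset_y correspond to the offsets A and B in the text,
    step_x/step_y to the element distances C and D."""
    return [
        (offset_x + c * step_x, offset_y + r * step_y)
        for r in range(rows)
        for c in range(cols)
    ]

# A 6 x 3 grid as in the example; offsets and spacings are invented values.
points = grid_positions(2.0, 1.0, 5.0, 5.0, cols=6, rows=3)
print(len(points))  # -> 18
print(points[0])    # -> (2.0, 1.0)
```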
See also
2 Case C: Manual Mode/Draw Graphics [} 1132]
Appropriate focus map settings are important for ensuring good focus quality throughout the
specimens. The focus map generation is done in two steps:
§ With a low magnification objective to generate the Coarse focus map. For this coarse focus map, only a few focus points are necessary. We recommend using the 5x Fluor for the Coarse focus, as the offset between the 5x and the scanning objective (mostly 20x) is marginal.
§ With a high magnification objective (normally the scanning objective), you generate the Fine
focus map. In this case, more focus points are needed and this is usually defined via a density.
The fine focus map is based on the coarse focus map.
The focus is not a traditional autofocus that needs to precisely hit the z-value with the sharpest image. The system acquires a z-stack (defined in the next step) and calculates a curve through the focus values (e.g., contrast values). This curve is used to determine the peak value, which makes it possible to cover a large range while still maintaining fast focusing. For more general information
about focus maps, see Focus Maps [} 1108].
Parameter Description
Copy Previous Setting Only visible in the Fine Focus Map step.
Adjusts your channel settings automatically based on the coarse focus map.
Find Focus Only visible if you have configured a motorized focus in the MTB (MicroToolBox).
Starts an autofocus search using the current settings.
Parameter Description
Live Opens Live view and shows the live image from the active camera.
Snap
Light Path Components Displays parameters of the Microscope Control Tool [} 980]. If Show All is deactivated, only the settings for the objective magnification selection are visible.
Channels Displays parameters of the Channels tool. For information, see Chan-
nels Tool (Axioscan Specific) [} 1151].
Focus section This section displays the options Mode, Quality, Search and Sam-
pling of the focus control. For detailed information about the param-
eters, see Software Autofocus Tool [} 333].
– Deletes the selected distribution strategy from the list. If you delete
xPol and pPol, the entire group is deleted.
– Opens a dialog to select and add a new focus point distribution strat-
egy to the list.
– Best The autofocus is contrast-based; the algorithm tries to find the highest contrast along the z-stack.
Parameter Description
Drawback: It is sensitive to detecting dirt, hence it should only be
used for the fine focus, not the coarse focus and not on particularly
dirty slides.
See also
2 Adjusting Settings for Coarse Focus Map [} 1089]
2 Adjusting Settings for Fine Focus Map [} 1092]
A focus point strategy set is applied for each scene, i.e., for each separately detected object. If you select, e.g., six fixed points for the Coarse focus, the software applies six focus points for each scene.
Certain focus point strategy sets have an additional checkbox called Prefer Border. If this checkbox is activated, the system places more focus points at the border of the detected specimen. This is advisable if the border of the specimen has significantly different focus values compared to the rest of the sample.
If the setting Density is part of the focus point strategy set, this value gives the percentage of generated focus points relative to the total number of tiles of the object (e.g., if the object has 2000 tiles and the setting is 0.1 (= 10%), the system generates 200 focus points). Since too many focus points would be generated, especially for larger objects (keep in mind that a larger number of focus points does not automatically result in higher overall focus quality), it is advisable to also define a maximum number of focus points (Max. Number of Points). It is rare that more than 24 focus support points yield better results.
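The relationship between density, tile count, and the maximum cap can be sketched in a few lines of Python. The function is only an illustration of the arithmetic described above; the name and the exact rounding behavior are assumptions, not the actual ZEN implementation:

```python
def planned_focus_points(total_tiles: int, density: float, max_points: int) -> int:
    """Estimate how many focus points a density-based strategy generates.

    density: fraction of tiles used as focus points (e.g. 0.1 = 10%).
    max_points: the Max. Number of Points cap, since more points do not
    automatically improve the calculated focus surface.
    """
    generated = round(total_tiles * density)
    return min(generated, max_points)

# Example from the text: 2000 tiles at density 0.1 would give 200 points,
# but a cap of 24 limits the result.
print(planned_focus_points(2000, 0.1, 24))  # -> 24
```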
If you use the automatic sample detection and you define a border dilation, the system subtracts this dilation before the focus points are distributed, so no focus point is placed in the dilation zone. This applies only to the automatic sample detection.
When using the Onion Skin method, to ensure that the focus points are within the sample, the system moves the focus points a certain number of fields of view away from the border. This value can be adjusted under Tools > Options > Acquisition > Axio Scan with the option Minimum Margin of focus point distribution in tile(s). The default setting is 1. For very thin samples, e.g. needle biopsies, change this value to 2 to prevent focus support points from being placed at the edges where little or no sample resides.
Item Description
Onion Skin This is a density focus schema. The resulting focus points are dis-
played as layers of onion skin, providing an even distribution and also
ensuring that the border in particular has enough focus points. This is
the standard setting for the fine focus map. The parameter is a den-
sity with a standard setting of 0.1. This setting means that 10% of the
total number of viewing fields will be used as focus points to calculate
the focus surface. For large specimens this number can get very large,
but a larger number of focus points does not always automatically
mean a better quality calculation of the focus surface. For this reason,
the user can also define a maximum number of focus points. This
number is normally in the range of 24 to 36 points. This focus point
strategy set is recommended for mid-sized to large objects.
Example (Density: 0.1; max number 24):
Item Description
Every Nth Row and Column The system puts a grid over the specimen with a distance between two focus points of the specified value (N), counted in camera viewing fields, in the x and y direction. This is a clearly defined way to create the focus points, even if it creates more focus points and takes longer to create the focus map. This focus point strategy set is recommended for mid-sized and smaller objects.
Example (every 5th tile):
Every Tile The system focuses on every tile. This is particularly time-consuming
for scans with a large magnification and/or large samples, however, it
yields the best results for very uneven specimens.
Fixed Number of Points Here the user defines a fixed number of focus points per object (scene). The default setting for the Coarse focus is six focus points per scene.
Example (n = 24):
Item Description
Density The density is similar to that of an onion skin; the difference is that
the pattern is not distributed like the layers of an onion. The distribu-
tion is more evenly spread over the specimen/scene. This setting can
also be used to define a maximum number of focus points. The cre-
ation of this focus map is more random – as shown in the example –
so usually onion skin is preferable to density.
Example (density: 0.1; max number 24):
Grid This setting defines a fixed grid over the specimen to establish the focus points for the focus surface. The value is in µm. This setting is independent of the current frame/sensor size of the camera used. If you want to use a setting that is based on the chip size of the camera, please use "Every Nth tile". As seen in the example, this schema is not as strict as Every Nth tile, thus Every Nth tile is recommended over the Grid cell size.
Example (1000 µm × 1000 µm):
Item Description
Center of Gravity The system will place one focus point within each scene/object. The
position of this focus point is the center of gravity, thus the whole ob-
ject has only one focus point. This is a useful setting for small objects
(e.g., Tissue micro arrays). Be aware that for certain structures (such
as a half moon), the center of gravity may lie outside the tissue, hence
the system has nothing to focus on. In such cases, select “Number of
focus points” with 1 as the parameter. The center of gravity is a valid
option, e.g., for Tissue Micro Arrays to place only one focus point in
each core.
10.3.4.1.4.7 Scan
In this wizard step you can adjust the scan settings. These settings are important as they deter-
mine the image quality that is subsequently experienced by the user in terms of exposure time,
white balance, etc.
Parameter Description
Copy Previous Setting Adjusts the experiment settings automatically based on the fine focus map.
Find Focus Only visible if you have configured a motorized focus in the MTB (MicroToolBox).
Starts an autofocus search using the current settings.
Live Opens Live view and shows the live image from the active camera.
Snap
Light Path Components Displays parameters of the Microscope Control Tool [} 980]. If Show All is deactivated, only the settings for the objective magnification selection are visible.
Parameter Description
Channels Displays parameters of the Channels tool. For information, see Chan-
nels Tool (Axioscan Specific) [} 1151].
Z-Stack Configuration
– Keep § Interval
Keeps the set interval between the section planes constant if you
change configuration parameters.
§ Slice
Keeps the set number of z-slices constant.
– EDF active Activated: Uses Extended Depth of Focus (EDF) for image acquisition.
– Stitching Configuration Selects the configuration that is used for stitching. The following options are available:
§ None: No stitching is configured. The image is unstitched and can
be stitched later within ZEN or with another software. This option
can make sense if you are scanning large slides only sparsely popu-
lated by sample to avoid stitching artifacts in the image.
§ Online: Stitching is performed during the acquisition. This is the
standard setting as this provides the best performance in terms of
the processing time of the slide.
§ Offline: Stitching is performed after the acquisition. This can improve the stitching results compared to online stitching, because a global optimization step can be added once all tiles have been acquired. However, the time needed for stitching is added to the scan time.
Parameter Description
consist of tiles; this is technically necessary to avoid a memory limit of the operating system, which cannot handle 2D images larger than 2 GB.
– Pyramid active Activated: Creates an image pyramid to speed up the viewing of the
image afterwards.
– JpegXR active Activated: Displays options for image compression. For information about the correlation between compression and file size, see Comparison Compression and File Size [} 1140].
Deactivated: Saves the image uncompressed.
See also
2 Extended Depth of Focus [} 175]
2 Adjusting Scan Settings [} 1093]
If you have activated Tools > Options > Acquisition > Axioscan > Show Acquisition Order,
you can select the Acquisition Order in the Acquisition Mode section.
Parameter Description
All channels per tile The system captures all channels for each tile first before it continues with the next tile. This is also the default setting and the best choice for fluorescence scans.
Note: This order is by default only used for fluorescence, as it will stress the condenser components and reduce the condenser lifetime in case of TIE or polarization acquisition.
Parameter Description
Mixed Mode This dimension order is only useful when combining brightfield, TIE and cPol. In this case, the system would need to constantly switch condenser elements between channels, causing a large mechanical stress on these components which might reduce the lifetime of the condenser. The system captures all tiles for, e.g., BF; afterwards it switches the filter wheel and/or condenser position and scans all tiles for the next optical set-up (e.g. polarization). This scheme is repeated until all BF/TIE/Pol channels are acquired. Afterwards the system automatically aligns these channels.
If the setup contains one or more additional fluorescence channels, the workflow changes. Considering two BF/Pol channels and one fluorescence channel, the workflow is as follows: The system acquires all tiles for channel one. For the second channel (Pol), it moves the filter wheel, shutter, etc., acquires the first tile of the second channel, and then acquires the fluorescence channel. Now it switches the optical setup back to the second channel, goes to the next tile, acquires the image, changes back to the fluorescence channel setup, and again acquires an image. This is repeated until all tiles are acquired. The reason is that now the second channel and the fluorescence channel are perfectly aligned. The system then aligns channel one to the other channels automatically, and the fluorescence channel is consequently also aligned to all channels.
All tiles per channel The system acquires all tiles for each channel before switching any other optical components. This is the fastest way to acquire multiple transmitted light channels, e.g. Brightfield and cPol. It then aligns all channels (see above). This mode is not recommended for fluorescence channels, as there are usually no common structures which the alignment algorithm could use.
If you see any issues regarding a sub-optimal alignment of the channels, you should select All channels per tile as the most reliable method.
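The difference between the two basic acquisition orders can be sketched as nested loops. The channel and tile names below are invented for illustration; only the loop ordering reflects the behavior described above:

```python
channels = ["DAPI", "GFP"]        # hypothetical fluorescence channels
tiles = ["T1", "T2", "T3"]        # hypothetical tile positions

# All channels per tile: switch channels at every tile (default, best
# for fluorescence scans).
per_tile = [(t, c) for t in tiles for c in channels]

# All tiles per channel: scan everything in one optical setup before
# switching components (fastest for transmitted light channels).
per_channel = [(t, c) for c in channels for t in tiles]

print(per_tile[:3])     # -> [('T1', 'DAPI'), ('T1', 'GFP'), ('T2', 'DAPI')]
print(per_channel[:3])  # -> [('T1', 'DAPI'), ('T2', 'DAPI'), ('T3', 'DAPI')]
```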
Info
Geometric Calibration Slide
The first position of the tray must hold the geometric calibration slide, the second position the
color calibration slide. The tray with the calibration slides must be placed in the first position
within the magazine.
In this wizard, you can calibrate your application for polarization acquisition. You have the possi-
bility to create a (new) calibration, or continue with a previously started one. The wizard guides
you through this calibration with a setup and preparation (also of the sample) as well as the polar-
ization calibration itself.
In case the system cannot properly work with the default values, you can also adjust the parameters for the calibration with a calibration file. As an example, the angle of the search area can vary depending on your hardware setup.
Here you find the most important functions to operate the Axioscan system. The tab is closely
linked to the Magazine View [} 1142], which can be found in the Center Screen Area.
2 Start Scan
Starts a scan of all activated/marked slides.
The scan consists of the preview generation (label area and specimen area), sample de-
tection, focus map, and high resolution scan of the specimen for all slides with state
New. The settings are used as defined by the assigned profile. If a preview scan was
acquired previously or if regions were already adjusted, these steps are not performed again.
3 Tool Area
Here you have Axioscan specific tools:
§ Axio Scan Tool [} 1148]
§ Naming Definition Tool [} 1148]
§ Storage Location Tool [} 1150]
§ Acquisition Monitoring Tool [} 1151]
This view gives you an overview of the Axioscan magazine. You can assign profiles and naming
conventions to slides, see the status, and move trays up and down (using drag and drop) to
change the scanning order. The system starts processing slides from top to bottom for all slides
that have been activated (via the checkbox in the Scan column). The view is closely linked to the
Scan Tab [} 1142].
1 Main buttons
These buttons allow you to select and create a scan profile and also to adjust the display
and settings of the magazine view. For more information, see Main Buttons [} 1143].
3 Slide layout
Each row within a tray represents a slide. For more information, see Slide Layout
[} 1144]. A right-click on a slide opens the Slide Context Menu [} 1146] with several
options for the selected slide.
For additional information, see also Additional Information for Magazine View [} 1147].
See also
2 Changing the Order of Slides and Trays [} 1100]
Parameter Description
Smart Profile Selection: Opens the Smart Scan Profile Selection wizard [} 1118]. This wizard
helps you to open predefined profiles. This is the fastest and most intuitive way to assign and
generate scanning profiles.
Select Scan Profile used for all slides with status new: From the dropdown list, you can select
the scan profile which is used for all newly inserted slides. The button opens the following
options:
– Create New Scan Profile: Creates a new profile. If a new profile was created, it will be
automatically assigned to all slides in the Magazine view.
– Create New Profile with Smart Profile Selection Wizard: Opens the Smart Profile Selection
[} 1118] to select a predefined profile. The user is guided through a series of questions about
the specimen, resulting in the most appropriate profile for the digitization process. It is
recommended to process this profile afterwards with the Scan Profile Wizard.
– Modify profile configuration for all slides with state new: Opens the Advanced Scan Profile
Editor [} 1120] to adapt all possible profile settings and allows the user to utilize the full
flexibility of the system. This wizard is recommended for experienced users.
Default Order: Resets trays and slides to their default positions. This button is only active if
you have changed the order of trays or slides.
Skip Slide: This button is only active if a scan job is running. It stops the scanning of the
current slide and continues with the next slide.
Expand All: Expands all trays so that you can see the slides contained within them.
Collapse All: Collapses all trays so that you can open a specific tray without being distracted.
Each row within a tray represents a slide. A right-click on a slide opens the Slide Context Menu
[} 1146] with several options for the selected slide. The slide columns are always the same.
Parameter Description
Scan Activated: The slide is marked for scanning/previewing.
Deactivated: The slide is not marked for scanning/previewing.
Slide Position Represents the physical position of this slide inside the tray.
Slide Overview Displays the preview image of the slide if a preview was executed. It
shows the label area and the specimen area.
Scan Profile: Assigns a specific scan profile to each slide. If a profile was adapted, the
system indicates this with a save icon and the profile name changes into an edit field. A
prefix, e.g. T1S1 (Tray 1, Slide 1), is added before the profile name.
If you click behind the scan profile name, the following context menu opens:
– Adapt selected profile for scan of this slide: Opens the Advanced Scan Profile Editor
[} 1120]. In this mode, the system takes the tray containing the slide from the magazine (only
if the tray which contains the slide is not already loaded). You can see a live image from the
different cameras (preview camera/scan camera) and navigate on the slide.
This mode is useful if you want to make any changes, e.g. change the exposure time, check
the focus strategies, etc.
– Check and correct sample detection results: Only available if a preview was generated.
Opens the Sample Detection Wizard [} 1119].
With the help of this wizard, you can change the region that the system scans afterwards and
review the focus points. Within the wizard, you can switch directly to the next or previous
slides. It is not necessary to move the physical glass slides to adjust these settings.
This wizard can be started even if the preview generation for other slides is still running.
– Show profile content as xml (view only): Opens an XML viewer which contains all of the
details for the scan profile. This information is only of interest to more experienced users.
Scan Status Displays information about the status of the slide in the scanning
process:
§ new: no preview and no scan was executed
§ preview done: preview (and prescan) was already executed and is
displayed (in the Slide Overview column)
§ finished: the scan was successfully executed
Image Name Displays the name of the image which will be generated. You can in-
sert a name manually, or the system creates a name according to an
automated naming rule. The selection of the procedure to be exe-
cuted depends on the Name Assignment column.
Name Assignment: Selects whether the system creates the name automatically using a
naming rule or whether you specify a name manually. It is also possible to change the name
of the image once the preview has been generated by simply clicking on the name and then
editing it. Depending on the settings under Naming Definition, you can also import names
via a CSV file or use the barcode information to define a name as a substring of the barcode.
Info
Accessibility
This context menu for the tray can be accessed with a right-click on the narrow gray areas
outside of the slides (left and top).
Assign scan profile: Opens a dropdown menu where the user can assign an existing profile to
all highlighted slides.
Mark all slides of selected trays for scanning: Selects all slides in the highlighted trays for
scanning (it is possible to select multiple trays).
Unmark all slides of selected trays for scanning: Deselects all slides in the highlighted trays
for scanning (it is possible to select multiple trays).
Expand all selected trays: Expands all highlighted trays with a single click. To highlight a
tray, hold the Ctrl key and click on the trays you want to highlight.
Info
Accessibility
You can access the context menu with a right-click in any gray area of the row representing a
slide.
Move Slide to Scan position Moves the selected slide to the scan position.
Assign scan profile Here you can select a scan profile and assign it
to the selected slide. If you have selected sev-
eral slides, the selected profile will be assigned
to all of them.
Mark all selected slides for scanning Marks all selected (highlighted) slides for scan-
ning.
Unmark all selected slides for scanning Unmarks all selected (highlighted) slides for
scanning.
Reset Scan Status to New Resets the scan status of the slide to new.
Reset Scan Status to Previewed Resets the scan status of the slide to pre-
viewed.
§ If the door is closed (using hardware or software), the system detects all trays inside the mag-
azine and establishes both the tray type and the positions in the tray that are occupied with a
physical slide.
§ The system also checks all slides by default (see Scan column) and assigns the Scan Profile
(defined on the Scan tab) to every slide if new slides are inserted. It is still possible to change
the profile and other settings afterwards (if the slides are not processed or currently scanned),
even if the batch process has already started.
§ Only the inserted slides are displayed. Empty places in the tray are not shown.
§ To enlarge the label image or the preview image of the specimen area, simply click on the
small preview image. You will see an enlarged view of this image for a better recognition of
the content. If the scanning for a slide is finished, you no longer see the overview image of
the specimen; instead, you see a low magnification representation of the resulting image. To
close the image, click on the image again, move the mouse pointer off the image, or click
another slide to enlarge another preview/scanned image. A double-click on the preview image
opens the advanced editor at the Sample Detection step.
Parameter Description
Tray list Displays a list of the trays. A click on the button with the tray number
jumps to the respective tray in the Magazine view and highlights it.
The signal colors in front of the buttons indicate the current status of
the trays. This is the same kind of signal that can be found on the de-
vice itself. Refer to the operation manual for a detailed description. A
double-click on the tray button inserts the mounting frame. If another
frame is mounted, it will be removed automatically.
System Overview Displays various status information of the device. It is the same infor-
mation which is displayed by the main indicator on the device itself
(e.g. ready, processing, warning).
- Open Opens/closes the door depending on the current status. This function
is equivalent to the Open/Close button on the device itself.
Here you can define naming definitions for the acquired files/images. You can select several defi-
nitions for the file names from the dropdown list. The name automatically contains the detected
barcode content if the barcode detection is active for the active profile.
Parameter Description
Naming Definition
options
- New Creates a new naming definition. The system will ask for the name of
the definition and you can then set up the naming definition in the
Naming Dialog [} 1149]. The name must have at least one character.
- New from Template: Creates a new naming definition based on an already existing
template.
Import Name, Profile and Scene names: Opens a file browser to import the image names and
used profiles. You also have the option to apply scene names. When starting an import, it is
necessary to insert all your slides beforehand. The software will only apply the imported
parameters for slides that have been inserted with the status new.
For more information on the import, see Importing Names, Profiles and Scenes [} 1101].
Name, Profile and Folder from Barcode: Activated: Uses barcode information to assign image
names, profiles and sub directories.
For more information on the import, see Using Barcode to Define Name, Profile and
Subfolder [} 1103].
- Name: Defines the positions of the characters in the string that should be used for naming.
- Profile: Defines the positions of the characters in the string that identify the profile.
- Match only the beginning of scan profile name: Activated: The system uses only the first
characters to identify a profile.
Deactivated: The system uses all characters to identify a profile. Note that this can be very
memory intensive and error prone.
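The Name and Profile settings can be thought of as character ranges applied to the detected barcode string. A hedged Python sketch of this idea (the barcode value and position ranges below are invented examples, not ZEN defaults):

```python
# Hypothetical sketch of substring-by-position naming from a barcode string.
# The barcode value and the position ranges are invented examples.
barcode = "AB12345-HE20"

name_positions = (3, 7)      # characters 3..7 (1-based, inclusive) -> image name
profile_positions = (9, 10)  # characters 9..10 -> scan profile identifier

def substring(code, first, last):
    """Extract a 1-based, inclusive character range from the barcode."""
    return code[first - 1:last]

image_name = substring(barcode, *name_positions)     # "12345"
profile_id = substring(barcode, *profile_positions)  # "HE"
```

With Match only the beginning of scan profile name activated, `profile_id` would only need to match the start of an existing profile name rather than the whole name.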
See also
2 Creating Your Own Image Naming Convention [} 1101]
Parameter Description
Prefix Defines the prefix.
Digits Defines the number of digits.
Format Displays and defines the Format IDs for the naming convention.
Preview Displays a preview of the image naming based on the settings and the
Format.
Format IDs table Displays all available parameters that can be used to create a naming
convention.
See also
2 Creating Your Own Image Naming Convention [} 1101]
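Conceptually, a naming convention combines the prefix, a counter padded to the configured number of digits, and format IDs such as the tray and slide position. A minimal sketch under these assumptions (the composition shown is illustrative, not ZEN's exact format-ID syntax):

```python
# Illustrative sketch of a prefix/digits naming rule; not ZEN's format-ID syntax.
def build_name(prefix, counter, digits, tray, slide):
    """Compose prefix, tray/slide position and a zero-padded counter."""
    return f"{prefix}_T{tray}S{slide}_{counter:0{digits}d}"

build_name("Scan", 7, 4, tray=1, slide=3)  # 'Scan_T1S3_0007'
```

The Preview field in the dialog serves the same purpose as the example call: it shows the resulting name before any image is acquired.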
NOTICE
Possible data loss when saving on network share via direct network connec-
tion
When using a network share as storage path and a direct network connection, e.g. by specify-
ing UNC network addresses only, no safety measure for caching is in place and data loss is eas-
ily possible.
Use the "map network drive" function of the operating system. This function involves
some internal caching mechanisms in case networking problems arise.
Here you can specify the storage location of the created images. The location can be a local path
on the computer as well as a network share to store the data on a remote server. It is recom-
mended to store the images locally, as depending on the network properties the connection may
not be sufficiently reliable to store the data. If a network share is selected, a transmission rate of
1000 MB/s or better is mandatory to avoid problems especially when scanning large slides.
By default the system first stores temporary images in the storage location. For information
on how to set up a separate folder for temporary files, see Using a Separate Folder for
Temporary Files [} 1106].
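For macro users, a quick sanity check can catch a direct UNC storage path (which, per the notice above, has no caching safety) before a long scan starts. A minimal sketch, not part of any ZEN API:

```python
# Illustrative check: is a storage path a direct UNC network address?
# Not part of ZEN's API; shown only to make the NOTICE above concrete.
def is_unc_path(path):
    p = path.replace("/", "\\")
    return p.startswith("\\\\")

assert is_unc_path(r"\\server\scans")       # direct network connection: risky
assert not is_unc_path(r"Z:\scans")         # mapped network drive: preferred
assert not is_unc_path(r"D:\local\scans")   # local storage: recommended
```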
Parameter Description
Dropdown list Displays storage locations that have already been used before.
Opens a file browser to select a folder to store the images.
To check the image quality (e.g. focus and exposure time) during the scan, the tool Acquisition
Monitoring is available.
Parameter Description
Enable for current Activated: Displays an overview of the specimen with a red rectangle
scan indicating the currently scanned region. Additionally opens an image
document which displays the currently scanned region.
Note: Content is only displayed while the system is in the scanning step; it is not displayed
during other steps such as the focus map steps.
See also
2 Monitoring the Acquisition [} 1104]
For Axioscan systems, you can select special contrast methods, like xPol (TL xPol) and TIE.
Info
Polarization tracks
The individual Pol tracks are synchronized, which is indicated by a lock symbol in the Channels
list. Adding the Pol contrast method in the Advanced Scan Profile Editor automatically adds
all available Pol tracks (depending on the hardware). Changing or deleting a Pol track automat-
ically changes or deletes all other tracks. Duplicating a track or changing the order for acquisi-
tion (with the arrow buttons in the channels list) is not possible for Pol tracks.
Parameter Description
Channel list Displays the currently added channel(s).
– Add: Opens the Add Dye or Contrast Method Dialog [} 869]. In the Advanced Scan Profile
Editor, a custom version of this dialog opens.
– Delete: Deletes the currently selected channel from the channel list.
– Focus Ref. Sets the selected channel as reference channel for focus actions or
stitching during acquisition.
– Options: Opens the Channels Tool Options Menu.
Lightsource: Selects the light source and adjusts the corresponding settings. You can adjust
the parameters of the light sources without having to save these in the hardware settings.
If you select the Use Setting entry, the settings for the light sources
disappear. The light source parameters from the hardware settings
are used instead for the acquisition of the channel.
Dye name (e.g. TIE or TL xPol): In the input field after the selected dye, you can enter an
additional name.
– Measure: Measures the correct settings for flash intensity and flash duration to produce an
image without saturation but with good dynamic range. The determined settings will be
shown in the corresponding edit fields.
Note: In flash mode the camera exposure time is always fixed and cannot be modified by the
user.
– Measure and White Balance: Measures the correct settings for flash intensity and white
balance.
– Contrast: Selects the contrast type. You have the following two options:
§ Phase Contrast: TIE processing creates an image that looks similar to Phase contrast.
§ DIC: TIE processing creates an image that looks similar to a DIC (Nomarski) image.
Shading Selects the shading type which is applied to the image. For the shad-
ing options in case of Widefield acquisition, see Channel Settings
(Widefield) [} 920].
– User Enables specific shading for the current channel. You can click Define
to create a specific shading reference.
– Onslide Automatic: An accumulated mean image is created for the current channel on the
slide when using the scan experiment on the shading area of the sample detection step. This
option is part of the Axioscan workflow when clicking Start Scan. It is recommended to use
this option only for objectives with a magnification of 20x or higher.
– Offslide Automatic: A snap image is created with no slide in the light path, which is used
as shading reference. This option is part of the Axioscan workflow when clicking Start Scan.
It is recommended to use this option only for objectives with a magnification lower than 20x.
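Conceptually, a shading reference acts as a flat-field correction: each pixel of the raw image is divided by the normalized reference, so that vignetted regions are brightened back to the mean level. A sketch of this idea with NumPy (illustrative only; ZEN's internal implementation is not documented here):

```python
# Illustrative flat-field (shading) correction, assuming the reference is an
# accumulated mean image as described above; not ZEN's internal code.
import numpy as np

def shading_correct(raw, reference):
    """Divide by the normalized reference so evenly lit pixels map to themselves."""
    reference = reference.astype(np.float64)
    flat = reference / reference.mean()   # normalize reference around 1.0
    return raw.astype(np.float64) / flat  # compensate vignetting

raw = np.array([[80.0, 100.0], [100.0, 80.0]])
ref = np.array([[0.8, 1.0], [1.0, 0.8]])  # darker corners (vignetting)
corrected = shading_correct(raw, ref)     # all pixels become 90.0
```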
This view can be displayed in the main view of the Focus Map Settings [} 1133] steps of the Ad-
vanced Scan Profile Editor [} 1120]. This view enables you to see, edit and verify the focus
points created by the respective focus point strategy.
2 Live Image
Displays a live image at the current stage position.
3 Overview Image
Displays an overview image with the detected regions and focus points and allows the
setting, editing and deleting of focus points. To edit the region(s), you have to go back to
the Sample Detection Settings [} 1126] step.
See also
2 Sample Detection Wizard [} 1119]
2 Working with Focus Points [} 1096]
Parameter Description
Point table: Displays the focus points and indicates if they are verified (by a green
checkmark). The table is linked to the overview image; selecting a focus point in one
highlights it also in the other.
– Tile Region Displays the name of the tile region where the point is set.
Verify: Starts the autofocus at the selected position to verify the focus point.
Be aware that the focus values you see during the verify run will not
be completely identical to the focus the tiles will assume at this position
during the scan. This is because the focus used during the scan is the
result of the calculated focus map, where the focus values are interpolated.
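The following sketch illustrates why an interpolated focus map can differ from individually verified points. The interpolation method ZEN actually uses is not specified here; inverse-distance weighting serves purely as an illustration:

```python
# Conceptual focus-map interpolation sketch. ZEN's actual interpolation method
# is not documented here; inverse-distance weighting is illustration only.
def focus_at(x, y, focus_points):
    """focus_points: list of (x, y, z) verified focus positions."""
    num = den = 0.0
    for px, py, pz in focus_points:
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0:
            return pz        # exactly on a verified focus point
        w = 1.0 / d2         # closer points weigh more
        num += w * pz
        den += w
    return num / den

points = [(0, 0, 10.0), (10, 0, 12.0), (0, 10, 11.0)]
z = focus_at(5, 5, points)   # 11.0 here: all three points are equidistant
```

A tile between focus points thus receives a blended z value, not the exact value the autofocus would measure at that spot.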
10.4 Celldiscoverer 7
10.4.1 Introduction
§ Tiles & Positions - In combination with focus strategies, this allows for easy and flexible
image acquisition, especially for multi-position and multi-wellplate experiments.
§ Advanced Processing & Image Analysis - Use the built-in image analysis functions, create
pipelines to run online image analysis, and modify experiments based on those results on the
fly.
§ Automation GUI - Automate routine experiments using scan profiles that can be started
with just one button.
§ arivis Cloud (on-site) Basic - On-site execution of arivis Cloud demo modules.
§ Experiment Designer - Configuration of inhomogeneous acquisition experiments.
§ Extended Focus - Calculation of a completely sharp 2D image out of a Z-stack.
§ Measurement - Advanced interactive measurement tools.
§ Multi Channel - Acquisition of fluorescence and transmitted light images in independent
channels.
§ Software Autofocus - Determination of the optimal focus position of the specimen.
§ Macro Environment - Powerful Python scripts allowing you to automate all kinds of
workflows, export data and connect to 3rd-party applications required for the workflow.
§ 3Dxl - Visualization of 3D or 4D image data.
§ Airyscan - Processing of data acquired with Airyscan 2 (LSM only).
§ Airyscan 2 Basic - Multiplex acquisition with 2x parallelisation (LSM 900 only).
§ arivis Cloud (on-site) Advanced - On-site execution of individual arivis Cloud modules.
§ Automated Photomanipulation - Acquisition of multiposition experiments with photoma-
nipulation.
§ Colocalization - Quantitative colocalization analysis between two fluorescence channels.
§ Connect - Advanced functionality of ZEN Connect.
§ Connect 3D Add-on - Extension for ZEN Connect for 3D workflows.
§ Data Storage Client - Connection to ZEN Data Storage database.
§ Deconvolution - Improvement of 3D image stacks via 3D-deconvolution algorithms.
§ Direct Processing - Processing of images directly during acquisition.
§ FRAP - Fluorescence Recovery after Photobleaching (FRAP) analysis.
§ Guided Acquisition - Automatic and targeted acquisition of objects of interest.
§ HDR Confocal Basic - High Dynamic Range (HDR) acquisition mode.
§ Intellesis - Image segmentation based on machine-learning algorithm, using pixel classifica-
tion.
§ Physiology (Dynamics) - Analysis of physiological time series data.
§ Third Party Import - Import of 3rd-party microscopy images into ZEN.
The Celldiscoverer calibration wizard is used for hardware calibration of the system. The individual
components can be selected for re-calibration. Calibration data are stored for comparison.
If you want to perform a calibration using the wizard, the following steps are necessary.
1. In the menu bar, select Tools > System Maintenance and Calibration...
à In the System Maintenance and Calibration dialog, the available Calibration proce-
dures will be shown.
7. Select the steps for the calibration procedure by activating the appropriate checkboxes.
8. Click on Next > to run the calibration.
9. After the calibration is done, click Finish.
à The calibration results will be stored in a file and the System Maintenance and Cali-
bration dialog appears again. The Celldiscoverer Calibration Wizard >> item is
marked by a green checkmark.
10. Close the System Maintenance and Calibration dialog with Close.
If the window closes, you have successfully performed the Celldiscoverer calibration.
The Celldiscoverer 7 allows you to create customized sample carrier templates, e.g. for IBIDI
multichamber slides.
This chapter describes an example of how to design a customized sample carrier template for
multichamber slides.
See also
2 Combining the Slide Holder Template with the Custom Template [} 1160]
10.4.3.2 Combining the Slide Holder Template with the Custom Template
See also
2 Activating the Automatic Sample Carrier Detection for the Customized Sample Carrier Tem-
plate [} 1160]
10.4.3.3 Activating the Automatic Sample Carrier Detection for the Customized Sample Carrier
Template
1. On the Sample tab, under Prescan Options, activate Sample Carrier Detection and click
on Configure.
2. In the Configure Sample Carrier Detection dialog, select Insert 2x Slide –Long and
click on Select.
à The Select Template dialog opens.
3. Under Workgroup Templates, select your customized sample carrier and click on OK.
4. Click on OK to exit the Configure Sample Carrier Detection dialog.
The Sample Carrier Detection will now associate the customized sample carrier template with
the long slide holder. Until changed, every time the slide holder is detected, the customized sam-
ple carrier template will appear.
The ZEN celldiscoverer profile enables you to create and manage specific shading references for
your individual use cases. Those references are saved in collections and can be selected and ap-
plied to your respective experiment.
à A shading reference is created for this channel and available in ZEN. It is added to the de-
fault collection in the shading management.
6. Repeat this step for every channel for which you want to define a reference. Additionally,
you can change objective and magnification with the Celldiscoverer tool in the Right
Tool Area for creating the next reference.
You have created shading references. You can now manage or apply individual references to your
experiment, see Managing Shading References [} 1161] and Applying Shading References
[} 1161].
See also
2 Shading Management Tool [} 1162]
Prerequisite ü You are working with ZEN celldiscoverer and you have created shading references, see Cre-
ating Shading References [} 1160].
ü You have set up your acquisition experiment.
1. On the Acquisition tab, open the Shading Management tool.
à The controls for shading management are displayed. The currently added channels are
displayed in the table.
2. In the Select Collection dropdown, select the collection with the shading references you
want to apply to your current experiment.
à The column Available in the table indicates if the individual shading reference is
applicable/suitable for the currently configured experiment in ZEN. If the column displays a
gray line, no reference is available for the respective channel. You can select another
collection or create a suitable shading reference, see Creating Shading References
[} 1160].
3. Activate the checkbox in the Apply column of the table for each shading reference you
want to apply in your current experiment and start your experiment. Alternatively, activate
Apply Shading Correction in the Channels tool.
The experiment starts and the selected shading references are automatically applied.
Prerequisite ü You are working with ZEN celldiscoverer and you have created shading references, see Cre-
ating Shading References [} 1160].
1. On the Acquisition tab, open the Shading Management tool.
à The controls for shading management are displayed.
2. Click Manage.
à The Shading References Manager dialog opens.
3. In the list on the left, select a collection. A collection can contain multiple shading refer-
ences.
à The shading references of the selected collection are displayed in the list in the middle of
the dialog.
4. Select a shading reference.
à Information about the selected reference is displayed in the Details section on the
right.
5. If you want to enter information about the Specimen, or a general Note, enter the infor-
mation in the respective text fields.
See also
2 Shading References Manager Dialog [} 1163]
This tool enables you to manage shading references for your experiment.
Parameter Description
Select Collection Selects which collection of shading references is displayed and appli-
cable to your channels.
Apply Available Shading References: The table displays the available shading references of
the currently selected collection.
– Available Indicates if the respective shading reference is available for the current
experiment.
– Apply Activated: Applies the respective shading reference for the current
experiment.
See also
2 Creating Shading References [} 1160]
2 Applying Shading References [} 1161]
2 Managing Shading References [} 1161]
Parameter Description
Collection Displays a list with all existing collections of shading references. The
details of the currently selected collection are displayed to the right of
this list in the shading reference table. A single collection includes
multiple shading references of different channels, magnifications and
hardware settings for different samples or sample carriers.
– + Add Displays a small text field and button to add a new collection.
– Delete: Deletes the currently selected collection and all its shading references.
Shading Reference Table: This table displays all shading references of the currently selected
collection. When you select one of the references, its detailed information is displayed in the
Details section on the right.
– Dye Name Displays the name of the dye for the shading reference.
– Objective Displays the objective which was used to create the shading refer-
ence.
– Optovar Displays the optovar which was used to create the shading reference.
Details Section Displays more detailed information for the currently selected shading
reference.
– Copy Opens a dialog to copy the currently selected shading reference to an-
other collection.
– File Name Displays the file name of the currently selected shading reference.
– Specimen Displays a text field to enter information about the specimen that the
reference is used for.
– Sample Holder Displays the sample holder which was used to create the shading ref-
erence.
– Light Source Displays the light source which was used to create the shading refer-
ence.
– LEDs Displays the LEDs which were used to create the shading reference.
– Camera Displays the camera which was used to create the shading reference.
– Objective Displays the objective which was used to create the shading refer-
ence.
– Magnification Changer: Displays the magnification changer which was used to create the
shading reference.
– Emission Filter Displays the emission filter which was used to create the shading ref-
erence.
– Reflector Displays the reflector which was used to create the shading reference.
– Notes Displays a text field to enter information for the shading reference.
See also
2 Managing Shading References [} 1161]
When working with ZEN Celldiscoverer, ZEN offers the functionality to define looped time
series experiments on multiple samples (e.g. with multi carrier insert plates).
Prerequisite ü You have started the ZEN Celldiscoverer profile and you have configured your multi carrier,
e.g. Insert - 6x Petri Dish, or Insert - 3 slides.
1. On the Acquisition tab, activate Experiment Designer.
à The Experiment Designer tool is displayed.
2. Open the Experiment Designer tool and click Multi Carrier.
à The multi carrier mode is activated and the appearance of the tool changes. The carrier
where the stage is located is displayed with a dark blue frame, and the currently selected
carrier is highlighted with lighter blue.
3. Open the Channels or Imaging Setup tool and define the channels for the experiment of
the currently selected carrier.
à The carrier view in the Experiment Designer tool displays the number and color of the
defined channels as well as objective information in the current carrier. The magnification
of all carriers is by default set to the parameters that were configured in the Celldiscov-
erer tool when creating the experiment.
4. To change the objective or magnification, use the Celldiscoverer tool in the Right Tool
Area. Then click Get Current Magnification in the Experiment Designer tool to use this
magnification for the experiment execution.
à The changed magnification is applied to the currently selected carrier and experiment.
5. In the top part of the Acquisition tab, activate the experiment dimensions you want to ac-
quire, e.g. Z-Stack, Tiles and/or Time Series.
à The respective tools are displayed.
6. Use the tools to set up your time series or z-stack experiment.
à The carrier view in the Experiment Designer tool displays the number of defined z-
planes and time points in the current carrier.
7. In the Center Screen Area, use the controls on the two toolbars to define tile regions for
your experiments. For more detailed information about the functionality, see Tiles & Posi-
tions with Advanced Setup [} 344] and Tiles Advanced Setup [} 372].
à You have defined tiles for acquisition. The carrier view in the Experiment Designer tool
displays the number of defined tiles/positions.
8. In the carrier view of the Experiment Designer tool, select the next carrier. Alternatively,
change the position by double clicking a carrier on the Scan Position tab in the Central
Screen Area.
à The selected carrier is highlighted in light blue in the Experiment Designer tool. If the
stage is on a different carrier, a warning and a button to move the stage are displayed.
9. Define suitable experiment settings for the newly selected carrier, i.e. define channels, se-
lect and apply a magnification, define tiles, time series and/or z-stack parameters.
à The carrier view in the Experiment Designer tool displays information for the settings.
10. Repeat the previous steps for all carriers that should be part of your experiment.
à You have set up an experiment for each carrier.
11. For Amount, set the number of loops you want to run across all carriers. For Interval Be-
tween Loops, define the time between finishing one loop and starting the next.
à You have configured an acquisition loop across all carriers.
12. Click Start Experiment.
Your configured experiments across multiple samples start. The acquisition runs a loop across all
carriers as defined by your settings. If you have configured a time series for a specific carrier, the
whole experiment including time series is run before continuing to the next carrier.
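The loop order described above can be sketched as follows (an illustrative sketch only, not actual ZEN code; the `run_experiment` method and parameter names are hypothetical):

```python
import time

def run_multi_carrier_loop(carriers, amount, interval_between_loops_s):
    """Sketch of the acquisition loop: each carrier's full experiment
    (including any configured time series) finishes before the stage
    moves on to the next carrier; the whole pass repeats 'amount' times."""
    for loop in range(amount):
        for carrier in carriers:
            carrier.run_experiment()  # hypothetical: runs tiles/z-stack/time series
        if loop < amount - 1:
            time.sleep(interval_between_loops_s)  # Interval Between Loops
```

The key point the sketch makes explicit is the nesting: the carrier loop is inside the repeat loop, so a time series configured for one carrier completes before the next carrier is visited.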
See also
2 Experiment Designer (Multi Carrier Mode) [} 1182]
The Auto Immersion functionality is used for adding immersion fluid (water) to water immersion
objectives and automatically renewing the immersion fluid during experiments. Selecting a water
immersion objective activates Auto Immersion automatically.
The following table illustrates the default and the maximum values for respective parameters:
1. In Right Tool Area, open the Celldiscoverer tool and select the water immersion objec-
tive.
à Auto immersion is initiated automatically.
à The Create Immersion and Remove Immersion buttons are available.
The immersion process can be controlled manually using the Celldiscoverer tool in the Right
Tool Area:
See also
2 Celldiscoverer Options [} 1193]
If you want to use Auto Immersion during an experiment, the following steps are necessary.
See also
2 Guided Acquisition [} 319]
2 Celldiscoverer Options [} 1193]
When working with ZEN Celldiscoverer, you can trigger a remote restart of the microscope via
the File menu.
The wizard calculates the disparity map for the alignment of camera and LSM frames in mixed mode
acquisition. Based on a reference image acquired in the wizard, a disparity map is generated and
applied to all LSM frames to ensure a pixel-wise overlay with the WF frames in the mixed mode
acquisition.
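Conceptually, applying a disparity map means warping each LSM frame by a per-pixel displacement field so that it lands on the camera pixel grid. A minimal sketch of this idea (not ZEN's internal implementation) using NumPy/SciPy:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def apply_disparity_map(lsm_frame, dy, dx):
    """Warp an LSM frame by a per-pixel displacement field (illustrative only).

    dy/dx give, for each output pixel, the vertical/horizontal offset into the
    source frame, so output[y, x] = lsm_frame[y + dy[y, x], x + dx[y, x]].
    """
    h, w = lsm_frame.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    return map_coordinates(lsm_frame, [yy + dy, xx + dx], order=1, mode="nearest")
```

With an all-zero displacement field the frame passes through unchanged; a calibrated field (as produced by the wizard) would bring each LSM pixel onto the corresponding WF pixel.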
4. Click on Next.
à Step 2/6 of the wizard opens.
5. Load the calibration slide in the center slide position (3-slide holder) and click Load Sample.
Calibration Info Tab: Displays the acquisition parameters automatically set in the wizard.
• Optics: Magnification used to acquire the reference image.
• Lighting: WF track is acquired using transmitted light. Confocal track is acquired with the
640 nm laser.
• Exposure: Exposure time of camera.
• Focus: Focus position (z). The stage position and focus are set automatically. The correct
focus position can also be defined via Auto-Focus. The software autofocus settings can be
set in the Software Autofocus tab.
Acquire the WF and LSM (confocal) image:
• Activate Auto-Focus to find the correct focus position. Otherwise, select Already in fo-
cus.
6. Click on Image acquisition.
7. Click on Next.
à The disparity map is calculated and Step 3/6 of the wizard opens.
à The radial distortion is calculated and Step 4/6 of the wizard opens.
Elastic Registration Statistics: A good result is achieved when the average is in the
range of 1 pixel.
Image View:
• Original: Displays the overlay of WF and confocal frame without applying the radial dis-
tortion correction.
• Affine/Distort/Elastic: Display of the individual correction steps.
Status:
• Successful calculation of radial distortion information is indicated by a green check
mark.
• Successful application of the radial distortion map is indicated by a green check mark.
9. After successful calculation, click on Next.
à The final disparity map image is saved and Step 5/6 of the wizard opens.
The Sample tab is the central point of operating the Celldiscoverer 7 system. At the top of the
tab you can activate two different modes, the Interactive or the Automation mode. Usually the
Interactive mode is used for the default workflow when no plate loader is present or required. The
Automation mode is especially suited for running routine experiments in an automated fashion,
see Sample Tab (Automation mode) [} 1173].
If you have selected Interactive, the following parameters are available:
Parameter Description
Load Sample Loads the inserted sample and performs various actions depending on
the selected options under Sample Carrier and under Prescan Op-
tions.
The detection of the bottom surface to measure the skirt of the sam-
ple carrier is always executed for every individual insert separately.
Eject Tray Ejects the loaded tray with the sample carrier.
Position: Shows the current position of the sample carrier. When the tray is ejected, the position
is Load, i.e. the system is ready to be loaded with a sample carrier. When the carrier is in scan
position (above the objectives), the stated position depends on the current carrier type. As an
example, it could state WellPlate or Petri Dish 6x, A1.
Sample Carrier
- Select... Opens a dialog to select the sample carrier template you want to
work with. Inside the selection dialog it is also possible to define and
specify your own custom sample carriers, for example your favorite
brand for a 384-well plate carrier. Note that you must always use the provided plates to avoid
collisions.
Starting with ZEN 3.8, a new 3x sample carrier is available, so make
sure to select the correct template to avoid collisions. To select the
correct 3x slide template, check the printed material number on the
frame.
- Measure Bottom Thickness: Activated: Measures the bottom thickness of the sample carrier
using special optics. Make sure the correct bottom material is selected for the used sample
carrier. When glass or COC is selected as the material, the thickness measurement uses a much
smaller search range. For Polystyrene as the selected material, the search range is large in any
case.
Note that the measurement may not be correct for embedded samples.
- Get Material Performs the material determination again at the current XY position.
Carrier Data: After the sample carrier detection during the prescan (see Prescan Options below),
a graphical sketch of the sample carrier data is displayed here. Additionally, detailed information
about the carrier is shown.
If the sample carrier detection option is not activated, you can enter the sample carrier data here
manually, e.g. Material, Refractive Index, Thickness, etc.
- Material Here you can select the correct bottom material, e.g. Glass/COC or
Polystyrene (PS).
- Refractive Index: Here you can modify the pre-selected value if required.
- Thickness Shows the measured bottom material thickness. You can adjust the
value if required.
- Skirt: By default this parameter cannot be edited. You can enable editing under Options >
Celldiscoverer > General > Allow manual adaptation of skirt.
This value is measured automatically for every sample as soon as the tray is loaded. It defines the
distance from the surface of the tray to the bottom surface of the actual carrier, e.g. the well
plate.
- Default Container: Specifies the container which is approached initially, e.g. to detect the
cover glass thickness and measure the skirt height.
- Max. Focus Position: Shows the current upper z-limit of the focus drive.
- DF Search Range: Shows whether the Definite Focus (DF) search range is restricted by the
z-limit.
If a warning is displayed, you can do the following:
§ Increase the z-limit (under Options > Celldiscoverer)
§ Reduce the imaging depth
§ Check the refractive index
Prescan Options
- Sample Carrier Detection: Activated: The system automatically detects which sample carrier
category is used. The result is displayed in the Carrier Data section. The Configure button
allows you to assign a specific carrier template to a carrier category. For example, the Pre-Scan
recognizes a 96-well plate (category). This "recognition event" can then be assigned to the
sample carrier template "MyFavorite96Plate".
- Create Carrier Overview: Activated: An overview image of the sample carrier is acquired with
the Pre-Scan camera. The overview image is displayed in the image document area after the
Pre-Scan.
- Read Barcodes: Activated: The system automatically detects barcodes on the sample carrier.
Currently the system can read codes placed on the short sidewalls of well plates or on top of the
carrier, e.g. on a slide.
With the Automation mode it is possible to create scan profiles that combine all the pre-scan op-
tions with an actual ZEN experiment. The Automation mode can be used for special sample carrier
inserts, e.g. for 2x Slide (long or short), 3x Slide or 6x dish.
Parameter Description
Default Scan Profile: Here you can set a default scan profile. This profile is used when new
sample holders are inserted.
If no profile exists, you first have to create a new profile via the Options menu, see below:
- Options § New Scan Profile…
Creates a new scan profile.
§ Open Profile Configuration
Opens the Profile Configuration [} 1176] dialog. There you can
edit an existing scan profile.
§ Rename
Renames the selected profile.
§ Save as
Saves the user profile under another file name.
§ Delete
Deletes the selected profile.
Start Prescan Starts a prescan of all selected slots or the inserted sample (if no plate
loader is used).
To select a slot for a prescan, go to Magazine view and activate the
corresponding checkbox in the Process column.
Start Scan Starts the scan of all selected slots or the inserted sample.
Eject Tray Ejects the tray which contains the sample holder.
Import File: If you click on this button, you can import a file (*.csv format) which contains
configuration data for the system.
Typically the following data can be imported:
§ Magazine, Slot, Barcode, Scan Profile, ImageFileName, SubPath
Under Separator you can select the separator used in the *.csv file (e.g. Semicolon, Tab).
If you click on Show, a preview of the imported data is displayed.
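As a sketch of how such a configuration file might be parsed, the following uses the column names listed above with a selectable separator (the exact format ZEN expects may differ; the file content here is hypothetical):

```python
import csv
import io

# Hypothetical example content following the columns listed above.
sample = (
    "Magazine;Slot;Barcode;Scan Profile;ImageFileName;SubPath\n"
    "1;1;ABC123;MyProfile;plate1;run1\n"
)

def read_loader_config(text, separator=";"):
    """Parse the configuration CSV with a selectable separator (e.g. ';' or tab)."""
    return list(csv.DictReader(io.StringIO(text), delimiter=separator))

rows = read_loader_config(sample)
```

Each row then maps a magazine slot to its barcode, scan profile, image file name, and sub-path.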
Storage Location Here you can specify the storage location (file path) for all created im-
ages.
All image sub paths, which might be read from the configuration file
will be inserted below the defined file path.
Naming Definition Here you can define naming definitions for the acquired images. You
can select several definitions for the file names from the dropdown
list. The name automatically contains the detected barcode content if
the barcode detection is active for the active profile.
Plate Loader Status: The display of this section depends on your system configuration.
If you do not use a plate loader, only the sample carrier which you have configured is displayed
in the Magazine view.
If you work with a plate loader, its magazine with the different slots is displayed. Common plate
loaders can have up to 4 magazines. One magazine can contain up to 12 slots. The slots hold
your sample carriers containing the sample.
The functions mentioned below only apply for the plate loader.
- Currently Loaded: Shows the location (in the hotel of the loading robot) of the currently
loaded sample holder.
- Detect All Checks all magazines for slots which contain sample carriers automat-
ically.
- Unload Unloads the currently selected sample carrier from the Celldiscoverer
back to its slot in the magazine.
- Reset This button is only shown in case of an error with the plate loader
(e.g. Plate loader is blocked). The respective message is shown in red.
In this case all controls will be disabled and the plate loader will stop
working. NOTICE! In case of a physical obstruction, e.g. by a
misplaced sample carrier, you first have to remove the sample
carrier before you click on Reset.
By clicking on Reset you can try to restore the system status of the
plate loader. The system then tries to solve the problem using some
internal error checking routines and additional actions.
Magazine 1 - ...: Here you see the graphical display of the magazine(s) and their slots. The
magazines are numbered from 1 - 4 (depending on the available magazines). Under a magazine,
each slot is displayed as a blue button. The slots are numbered as well (e.g. from 1 - 12). The
icon in front of a button shows the loading status. The following statuses are possible:
§ gray = empty slot
§ blue = occupied slot
§ blue blinking = carrier is currently on the stage
§ yellow = problem detected
§ red = error occurred during processing
§ green blinking = slot has status prescanned
§ green = scanning of the slot is finished
Parameter Description
Selected Profile Shows the name of the selected profile. Under Profile Description
you can enter a short description, if desired.
- Automatic Carrier Detection: With this option, the scan profile can contain data for different
carrier types. These are visible inside the Sample Carrier list view.
- Fix Carrier Assignment: Only one specific carrier type is assigned to the scan profile. This is
useful when working with the same carrier type all the time.
Options
- Create Carrier Overview: Creates and saves a sample carrier overview image during the
prescan, which is stored inside the actual image data file as an attachment.
- Detect Occupied Positions: In case of a sample carrier with multiple inserts, the system
automatically checks for empty positions.
- Read Barcodes: Reads the barcodes from the respective position on the sample carrier itself
(the position depends on the carrier).
Sample Carriers: This list view allows managing the sample carrier types supported by the
current scan profile. The + button allows you to add more carrier types. The Delete button
deletes the carrier type including all assigned scan experiments.
Carrier Configuration: This area shows the respective carrier configurations and allows you to
modify them. For normal well plates there will be only one data set visible, but for carriers with
multiple inserts it is possible to define a different configuration for every available insert.
- Use same sample carrier configuration for all positions: When activated, the same
configuration is used for all sample carriers during processing.
Assigned Scan Experiments: Here you can assign the actual ZEN experiments. For carriers with
multiple inserts it is possible to assign an individual experiment to every insert, e.g. it is possible
to run a different experiment for every petri dish when using the 6x petri dish holder.
- Use same scan experiment for all positions: Allows assigning the same experiment to every
individual position of the carrier, e.g. use the same experiment for all petri dishes of a 6x petri
dish holder.
When you are on the Sample tab in the left tool area, the Navigation view displays the loaded
sample carrier. In general the view is used for navigation purposes. Depending on the carrier, you
can use the Scan Position tab to navigate to the single containers of the carrier (e.g. of a petri
dish) by simply double clicking on the desired container. The stage will move automatically to the
center of the selected container. In case of carriers with multiple inserts (e.g. 6x petri dish insert)
the tab allows you to switch between the different inserts.
Parameter Description
New Tab: The live/continuous image is displayed in a new tab.
Bring Navigator into View: Enlarges the live/continuous image to full view inside the
Navigation window (works for all three options New Tab, Separate Container, Navigation
View).
In general the Magazine view is used to get an overview of the status of the sample carrier(s)/
scan items in the tray of the system or in the magazine(s) of the plate loader.
The display of this view depends strongly on the system configuration. If you use the system with-
out a plate loader the configured sample carrier is displayed.
Parameter Description
Expand All: Expands all scan items so that all of them are visible including their configurations.
(This button is only visible when a plate loader is present.)
Collapse All Collapses all scan items so that only the short information overview
and the processing status is visible. (This button is only visible when a
plate loader is present.)
Mark All Activates the processing markers for all scan items. When clicking
Start Prescan or Start Scan those items will be processed, if their
status allows for those actions (new, prescanned, stopped).
Default Order: Restores the default order of slots. Their order can be changed via drag & drop.
Newly detected carriers will be added at the end.
Select All Selects all items and allows the execution of a context menu entry for
all shown elements at once.
- Show Profile: Opens an XML viewer to inspect the complete scan profile configuration.
The options below are opened via right click on the corresponding slot (e.g. Magazine:1 Slot:1).
Parameter Description
Assign Scan Profile: Opens a list with all available scan profiles. A click on the desired profile
assigns it to the currently selected items.
Mark all scan items of highlighted trays for processing: Activates the processing checkbox
for all selected slots and their scan items.
Unmark all scan items of highlighted trays for processing: Deactivates the processing
checkbox for all selected slots and their scan items.
Collapse all highlighted holders: Collapses all selected sample carriers.
The options below are opened via right click on an individual sample carrier inside the list (scan
item).
Parameter Description
Move to Scan Position: Moves the stage to the respective scan position. This depends on the
sample carrier and a possible insert, and therefore on the scan item type.
Mark all highlighted scan items for processing: Activates the processing checkboxes for all
selected scan items.
Unmark all highlighted scan items for processing: Resets all processing checkboxes for all
selected scan items.
Reset Scan status to New: Resets the status of the scan items to "new".
When working with ZEN Celldiscoverer, the Experiment Designer tool offers a special mode
to define a repeated experiment loop over a multi carrier insert. For information about the param-
eters of the standard mode, see Experiment Designer Tool [} 287]. When you activate the multi
carrier mode, the appearance of the tool changes and provides the following functionality:
Parameter Description
Get Current Magnification: Applies the current magnification to the currently selected carrier.
Paste Pastes the previously copied experiment setup to the currently se-
lected carrier.
Options
– Import from Experiment: Opens a dialog to import experiment blocks from existing
experiments, see Import Experiment Blocks Dialog [} 289].
– Export to Experiment: Opens a dialog to export your current experiment blocks as an
experiment.
– Clear Selected Carrier: Deletes all experiment settings for the currently selected carrier.
Carrier View Displays a graphical representation of the current multi carrier. The
carrier where the stage is located is illustrated by a dark blue frame.
The currently selected carrier is highlighted in a lighter blue. If the currently selected carrier is
not the one where the stage is located, a warning and a button to move the stage are displayed
below the view. Each carrier also displays the set experiment parameters, like the number of
tiles, z-slices, time points and channels, as well as information about the selected magnification.
– Move Stage Only visible if the currently selected carrier is not the one where the
stage is located.
Moves the stage to the currently selected carrier to set up the focus,
exposure time, etc.
Loops Across Carriers: Defines the loops across all the carriers.
– Amount: Defines the number of loops that are run for all carriers.
See also
2 Running Looped Time Series Experiments on Multiple Samples [} 1164]
The Celldiscoverer tool is located in the Right Tool Area. Note that the tool is not visible in
Automation mode. The tool is used for controlling the system hardware components, like
objectives, beam splitter, filter wheels, light path, etc.
Parameter Description
Objective List Here you can easily switch between the objectives and pre-magnifica-
tion. The color bar on the objective buttons indicates the color for the
respective stage limit indicator inside the Navigation tab.
If you select AutoCorr objectives (motorized correction collar) you can
additionally adjust the relevant settings like Bottom Thickness or
Imaging Depth.
Tank Level Shows the current filling level of the immersion fluid tank.
Create Immersion: The system automatically adds the immersion fluid to the selected
immersion objective. Once the fluid has been added, the button is grayed out.
Remove Immersion The system automatically removes the immersion fluid from the se-
lected immersion objective.
Bottom Thickness: Only visible if an AutoCorr objective is selected.
Sets the bottom thickness. We recommend not setting the thickness here, but letting the system
determine the bottom thickness automatically by activating Measure Bottom Thickness on the
Sample tab.
Beam Splitter List Selects the desired beam splitter from the list. If you change the beam
splitter the corresponding emission filter from the list below is
changed as well.
Filter List Selects the desired emission filter from the list. A change here will not
affect the selected beam splitter.
Pipette Position If you click ON, the system moves to the pipette position, where it is
possible to add reagents to the sample. The tip of the pipette will be
located at the center of the optical axis indicated by the blue crosshair
inside the Navigation tab. Make sure the height adjustment for the
pipette tool is correctly adjusted to the current carrier geometry.
Microscope Control: Opens the Microscope Control dialog. There you can see and adjust the
full light path of the system. We recommend adjusting settings in the light path only if you know
what you are doing.
The Celldiscoverer 7 allows the combined acquisition of camera and LSM tracks. This unique
Mixed Mode Acquisition ensures the precise overlay of the widefield and confocal/Airyscan im-
ages. The resulting file contains image(s) of all aligned channels, e.g. for seamless analysis work-
flows.
In the Mixed Mode Settings of the Acquisition Mode window, a pixel-wise overlay of the
widefield and confocal frames is set up. Optimal image alignment is achieved by registering the
LSM image to the camera image. For this purpose, a so-called disparity map is automatically
created for each Celldiscoverer 7 system using the Mixed Mode Disparity Map wizard [} 1167].
As a result, four different image processing functions are automatically carried out when the
mixed mode acquisition is activated, correcting for variations in scaling (related to total
magnification and pixel size), distortion, position/offset, and pixel numbers (cropping of the image).
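The scaling, offset, and cropping part of that correction chain can be illustrated with a toy example (the real pipeline also includes a distortion correction and uses calibrated parameters from the disparity-map wizard; the function and parameter names here are hypothetical):

```python
import numpy as np
from scipy.ndimage import affine_transform

def register_lsm_to_camera(lsm, scale, shift_yx, crop_hw):
    """Toy correction chain: scale, shift, then crop to the camera frame.

    Computes output[y, x] = lsm[(y - shift_y) / scale, (x - shift_x) / scale],
    then crops to the requested (height, width).
    """
    matrix = np.eye(2) / scale                     # output -> input scaling
    offset = -np.asarray(shift_yx, float) / scale  # output -> input shift
    warped = affine_transform(lsm, matrix, offset=offset, order=1)
    h, w = crop_hw
    return warped[:h, :w]
```

The chain order mirrors the description above: resample to a common pixel size, align the position, and finally crop to matching pixel numbers.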
See also
2 Parameters for LSM Imaging Modes [} 875]
2 Acquisition Mode Tool [} 874]
By clicking on Combined, the direct overlay of WF and confocal frames is automatically adapted
to the tracks for all zooms and magnifications. For the combined acquisition of WF and confocal
tracks in the mixed mode, no scan field offset and rotation is possible and the bit depth is fixed to
16 bit per pixel. The mixed mode is available for Widefield, LSM confocal, Airyscan HS, and
Airyscan MPLX tracks. The following tracks can be combined:
Zoom: Zoom is automatically adapted to both the camera and the LSM track. The widefield
image is interpolated to correspond to the LSM image. Zoom affects image size and sampling.
The number of pixels (frame size) remains constant when changing the zoom.
Frame size: Adapted according to the active sampling. Changes of the frame sizes (e.g. via
Presets) influence the sampling.
Sampling: The channel with the highest sampling demand is the master and defines the default.
§ Confocal: xy sampling set for 1.0 x Nyquist (2 times sampling) to achieve optimal settings for
confocal resolution. The camera frame is cropped to the LSM frame.
§ Camera: Adjusts the LSM parameters to match the physical pixel size of the camera.
LSM specific settings (for more information, see Parameters for LSM Imaging Modes [} 875]):
§ Scan Speed
§ Direction Monodirectional or Bidirectional
§ Line Step
§ Averaging
Mixed Mode Setting with Widefield and Airyscan HS/ Airyscan MPLX HS track:
Sampling: The channel with the highest sampling demand is the master and defines the default.
§ Confocal: xy sampling set for 1.0 x Nyquist (2 times sampling) to achieve optimal settings for
confocal resolution. The camera frame is cropped and resampled to match the LSM frame.
§ HS/ MPLX HS: xy sampling set for 1.5 x Nyquist (3 times sampling) to achieve better SNR.
The camera frame is cropped and resampled to match the LSM frame.
§ Camera: Adjusts the LSM parameters to match the physical pixel size of the camera.
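The sampling factors above can be related to pixel size with the usual Nyquist rule of thumb (a generic optics estimate, not ZEN's exact calculation):

```python
def nyquist_pixel_size_nm(wavelength_nm, na, nyquist_factor=1.0):
    """Approximate lateral pixel size for '<factor> x Nyquist' sampling.

    Abbe lateral resolution ~ lambda / (2 * NA); Nyquist requires at least
    2 samples per resolved period, so 1.0x Nyquist = resolution / 2, and
    1.5x Nyquist (3x sampling) shrinks the pixel by a further factor of 1.5.
    """
    resolution_nm = wavelength_nm / (2.0 * na)
    return resolution_nm / (2.0 * nyquist_factor)
```

For example, at 488 nm and NA 1.0 this estimate gives roughly 122 nm pixels for 1.0x Nyquist and roughly 81 nm for 1.5x Nyquist, illustrating why the HS/MPLX HS modes use a finer grid for better SNR after processing.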
For separate acquisition of WF and confocal tracks, press the Separate button in the Mixed
Mode Settings. The frame size of the camera is reset to maximum. The frame size of the LSM
track is not changed.
Depending on the selected track in the Channels window, the Acquisition Mode tool is adapted
when the Separate tab in the Mixed Mode Settings is active.
For an active WF track, all camera relevant parameters are shown (for details see the chapter for
Acquisition Mode [} 874]). The frame size of the camera is reset to Maximize. Other parameters
are unaffected.
For an active LSM track, all relevant scanning parameters are shown.
10.4.17 Airyscan HS
On the Acquisition Mode tool, the Airyscan HS specific parameter options are the following:
Parameter Description
Sampling
– Confocal: XY sampling set for 1.0x Nyquist to achieve confocal resolution with increased
signal-to-noise with the Airyscan detector.
For information about the general parameter options of the Acquisition Mode, see Acquisition
Mode Tool [} 874].
The Celldiscoverer 7 contains the specific Airyscan MPLX HS (high sensitivity) mode. This track
type cannot be combined with Confocal, HS or WF tracks. Mixed Mode Acquisition is not pos-
sible.
The Airyscan MPLX HS track is a selection option in the Imaging Setup.
On the Acquisition Mode tool, the Airyscan MPLX HS specific parameter options are the fol-
lowing:
Parameter Description
Sampling
– CO-2Y: XY sampling set for 1.0x Nyquist to achieve confocal resolution with increased
signal-to-noise with the Airyscan detector.
For information about the general parameter options of the Acquisition Mode, see Acquisition
Mode Tool [} 874].
See also
2 Acquiring LSM 900 Images with Airyscan 2 Multiplex Modes [} 1200]
2 Acquiring LSM 980 Images with Airyscan 2 Multiplex Modes [} 1202]
This tool is used to set up dispensing events for certain container(s) on a sample carrier. To per-
form dispensing, the system moves the sample carrier to the dispensing position where you can
easily add dispensing fluid to the selected container(s). The Dispensing checkbox is displayed
when Time Series is activated.
Parameter Description
Sample Carrier Shows the selected sample carrier which will be used for dispensing.
Options
- Repeat every: Here you can adjust the interval between dispensing events during an
experiment. E.g. if you set the value to 5 min, the software pauses the running experiment after
5 minutes and allows you to add a substance to your sample.
- Dispensing events: Here you can adjust the total number of dispensing events for an
experiment.
- Select containers: If you click on the Select button, the Container selection dialog opens.
There you can select the containers to which you want to add the fluid.
10.4.20 Perfusion
To enable perfusion experiments, insert the POC-R dispensing chamber into the Insert plate for
perfusion.
Do not connect the perfusion tubes when loading the sample to avoid any damage.
Once the sample has reached the imaging position, the perfusion tubes can be connected. By
clicking Measure Bottom Thickness, the skirt height and bottom thickness are determined.
Parameter Description
General
- Use Automated Camera Frame Correction: When using the 0.5x after-magnification, the
chip window on the camera is adjusted automatically in order to remove the dark areas inside
the corners caused by the limited FOV of this after-magnification (only).
- Enable Automation Mode (restart required): If activated and you work with a plate loader,
the Automation mode is automatically activated on the Sample tab.
- Disable empty check for automation: Activated: In Automation Mode, the sample is
loaded without Automatic Sample Carrier Detection.
We recommend this option if there are problems with the Automatic Sample Carrier
Detection in Automation Mode. The sample carrier type has to be defined in advance.
- Allow manual skirt adaption: Activated (not recommended): The skirt height of the carrier
can be adapted manually.
Caution: This is not recommended! Modify the skirt height only if it is known exactly. Otherwise,
there is a risk of crashing the objective into the sample carrier.
- Ask User to Adapt Experiment with Inserted Sample Carrier: Activated: Shows a warning
during loading of the sample carrier if the sample carrier does not fit the actual experiment.
Allows you either to adapt the experiment to the sample carrier or to leave the experiment
unchanged.
Caution: The adaptation of the experiment to the actual sample carrier will delete all tile
regions/positions.
- Use Default Thickness Detection (recommended): Activated: Uses the default thickness
detection.
Deactivated: This is not recommended. Displays controls where you can adjust the cover glass
thickness detection options manually for troubleshooting.
- Use Aberration Correction: Activated: Attempts to improve the result of the thickness
measurement by considering aberration effects. By default this setting is deactivated.
Prescan Options
- Show Images Used for Recognition: Activated: Displays the images used for recognition
inside the document area after the Pre-Scan. This option should only be activated for
troubleshooting the Pre-Scan sample carrier recognition.
- Show Live Image during Automatic Sample Carrier Calibration: Activated: Displays the
live images used for the automatic sample carrier calibration inside the document area after the
Pre-Scan. This option should only be activated for troubleshooting the Pre-Scan sample carrier
recognition.
Z-Limit above Surface: Displays controls for adjusting the default z-limits of different objectives
to increase the available XY traveling range.
The icons inside the DF Search Range column indicate if the available z-range is sufficient for
performing a Find Surface operation.
Auto Immersion - Reimmersion Settings
- Reimmersion During Experiment Every: Configures the time interval and travel distance for
automatic immersion events. The event that takes place first triggers the renewal of the
immersion during the experiment.
Pump Settings
- Remove Water: Adjusts the duration for the removal of the applied water. The set time
defines how long the suction pump is switched on (default: 5000 ms). Note that a small water
droplet may remain.
- Apply Water: Adjusts the duration for the application of water (default: 2500 ms).
- Waiting Time after Refill: Sets the waiting time after a refill that pauses the image
acquisition to allow the water to distribute equally (default: 2000 ms).
Phase Gradient Contrast Settings
- Automatic Half Pupil Angle Adaptation: A phase gradient contrast image is calculated from
two single images acquired with two different angles of the half pupil. If activated, the optimal
choice of the angles is calculated depending on the current X/Y position inside the well.
Airyscan HS and Airyscan MPLX HS tracks are selected as a detection mode in the drop-down
menu of the LSM tab.
Prerequisite ✓ Microscope and hardware components are switched on and are ready for operation.
✓ The ZEN software is installed on your computer.
1. Start the software as described in the general chapter Starting Software [} 22].
2. On the profile selection window, click ZEN lattice lightsheet.
à The software starts and a dialog for stage/focus calibration is displayed.
3. Click Calibrate Now to start the calibration. This step is needed to obtain absolute stage
coordinates when the hardware was newly started. You can skip this step if the hardware
was not turned off since the last calibration.
4. On the top of the Acquisition tab, click Select in the Sample Carrier section. Alternatively, click Detect for automatic sample carrier detection.
à A dialog to select a sample carrier template opens.
5. Select the sample carrier template that is used for the experiment and click OK.
à You have selected the sample carrier for your experiment.
6. On the Acquisition tab, in the Imaging Setup tool, click +Lattice Lightsheet to add a
Lattice Lightsheet track. Alternatively, the track can also be added in the Channels tool, or
the Lightsheet/Aberration Control tool.
à You have started and prepared the software for Lattice Lightsheet. You can now set up
your experiment.
Parameter Description
Navigation Control Enables you to navigate the sample in a microscope equipped with a
motorized stage, see Stage Tool [} 985].
– Auto Suggest Suggests conducting a cleaning stroke after a defined number of scans.
Parameter Description
Auto Immersion Activated: Enables Auto Immersion and performs it asynchronously to the experiment.
– Pause experiment for immersion If the set time interval has elapsed, the experiment is put on hold and one pump stroke is applied.
Parameter Description
– Wait for Acquisition to finish If the set time interval has elapsed, the system waits for the current step to finish before applying water.
Prime Pump The immersion fluid is continuously flushed and renewed until the
Stop button is clicked to finish this process.
Parameter Description
Focus Sheet Moves the light sheet perpendicular to the illumination objective lens
axis.
Focus Waist Moves the light sheet along the optical axis of the illumination objective lens.
Parameter Description
Find Beam Reflection Focuses the sample on the upper cover glass surface.
Focus Sheet and Waist Moves the light sheet along the optical axis of the illumination objective lens as well as perpendicular to it; aligns the light sheet for best image quality.
Carrier Tilt Correction Automatically levels the sample across all activated tile regions. Only available if Tiles is active and at least three tile regions are defined and activated.
Store Focus Defines the focus position in z with respect to the upper cover glass
surface.
Recall Focus Sets the focus to the defined focus position. This is only active if
Store Focus was executed.
10.5.6 Deskew
Parameter Description
Settings Allows the management of settings files, see General Settings [} 83].
– Cover Glass Transformation Image frames are deskewed and transformed to an orthogonal coordinate system based on the coverslip geometry.
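Conceptually, the Cover Glass Transformation above amounts to a per-plane lateral shear of the acquired stack. A minimal sketch, assuming a stage-scan geometry with a scan step `step_um`, a light sheet angle `angle_deg` relative to the coverslip, and a lateral pixel size `pixel_um` (all hypothetical parameter names; ZEN performs this internally):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def deskew_stack(stack, step_um, angle_deg, pixel_um):
    """Shear each z-plane laterally so that the obliquely acquired planes
    land on an orthogonal, coverslip-aligned grid."""
    shear_px = step_um * np.cos(np.deg2rad(angle_deg)) / pixel_um
    out = np.zeros(stack.shape, dtype=float)
    for z, plane in enumerate(stack):
        # shift plane z by z * shear_px pixels along x (linear interpolation)
        out[z] = nd_shift(plane.astype(float), (0.0, z * shear_px),
                          order=1, cval=0.0)
    return out
```

The shear per plane follows directly from the geometry: each successive plane was acquired after the stage moved one step along the coverslip, so its content must be shifted back by the in-plane component of that step.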
Parameter Description
Settings Allows the management of settings files, see General Settings [} 83].
Deconvolution Activated: Displays the Deconvolution and PSF Settings tab to set
up a deconvolution. For detailed information, see Deconvolution (ad-
justable) [} 95].
Deskew Activated: Displays the Deskew tab with the functionality of the re-
spective processing function, see Deskew [} 1198].
Select Subset Creates an image subset used for processing. For detailed descriptions
of the parameters, see Create Image Subset [} 108].
You can export the temperature data that is logged during the acquisition of temperature series images (cryo images) in ZEN. The export depends on the software version you have started.
Prerequisite ü You have opened the temperature series image to export the temperature data.
1. In the Cryo Temperature tool, click on Export Temperature Data.
à The temperature data is displayed as a chart.
2. In the Export tab, click on Export Table.
3. In the file browser, select the storage location and click on Save.
You have exported the image temperature data.
Temperature data of images is saved in the metadata. To display the information directly in the image, you need to create an annotation.
Multiplex modes allow faster image acquisition with Airyscan 2 detectors. The acquisition is parallelized in the Y-direction, which makes it possible to process full SR or confocal resolution even though not every line in Y was scanned during acquisition. The Airyscan 2 detector has nonetheless acquired data for all lines in the image.
1. On the Acquisition tab, in Imaging Setup select Airyscan MPLX in the drop-down list.
à The respective Airyscan mode menu will be displayed in the Acquisition Mode tool.
2. Click on SR-2Y to activate a parallelization of 2 lines in the Y-direction. Alternatively, if no
super resolution is required, click on CO-2Y.
Multiplex modes allow faster image acquisition with Airyscan 2 detectors. The acquisition is parallelized in the Y-direction, which makes it possible to process full SR or confocal resolution even though not every line in Y was scanned during acquisition. The Airyscan 2 detector has nonetheless acquired data for all lines in the image.
1. On the Acquisition tab, in Imaging Setup select Airyscan MPLX in the drop-down list.
à The respective Airyscan mode menu will be displayed in the Acquisition Mode tool.
5. Use the slider between these buttons to select any sampling between these sampling points.
6. You can also select a desired image format, e.g. 1024 x 1024 pixels. The slider then displays the sampling factor derived from the selected zoom or crop area.
10.7.1.3 Using Smart Setup with Airyscan LSM 980 and LSM 900
After selecting samples and overall preference for speed (single track, more crosstalk) or signal
(multi track, less crosstalk), you can choose LSM confocal or Airyscan 2 detection modes in Smart
Setup.
10.7.1.4 Using the Sample Navigator with LSM 980 and LSM 900
The Sample Navigator wizard is a tool to find the focus plane and quickly acquire an overview scan of your sample. It also simplifies the search for a region of interest for the actual imaging experiment. Together with Smart Setup, it allows you to make all basic settings for a new sample without extensively searching the sample through the eyepieces.
à A wizard opens that guides you through the required steps. On the left you find general settings, in the middle temporary information and the steps to perform.
2. Put a sample on the stage and center the sample with some structure over the center of
the objective.
The preferred objective is a 2.5x overview objective. Alternatively, 5x and 10x objectives can be used if an Axiocam is available on your system. If no Axiocam is present, the LSM T-PMT is used for detection.
3. Select a checkbox with the wavelength that matches the label of your sample.
à You are asked whether this sample was already focused, or if a new sample was put on
the microscope. With a new sample, you are asked to focus the objective a bit closer
than the putative focus plane. This is because the autofocus will move away from the
sample when searching for the focus plane.
4. Click on Objective is now close to the sample to confirm.
à The selected coverage is displayed by a red area on the sample carrier drawing.
à This guides you to Step 3, which selects the Condenser Filter in case the T-PMT was selected as detector.
8. Since Sample Navigator uses transmitted fluorescence with the T-PMT, a laser blocking filter needs to be inserted. This filter is located in one of the DIC positions of the condenser and is labelled T-FL. Correct Köhler adjustment is also needed. Setups with Sample Navigator have a line marking on the condenser carrier, which helps to find the right position.
10. Click on Focus Detection to start the autofocus, and observe the image in the middle.
11. If the image is over- or underexposed, the light intensity needs to be adjusted with the
slider Light Source Intensity.
12. In case the autofocus result is not sufficient, correct it by manually focusing the sample with the focus drives or Ctrl + mouse wheel.
13. Click on Finish to start the automatic overview tile scan.
The overview image can be used to navigate to ROIs by double-clicking on an interesting structure. This re-centers the stage. Proceed with normal image acquisition by setting up acquisition parameters for a higher magnification objective.
To acquire confocal images you first have to set up the acquisition parameters and configure your experiment. We recommend using Smart Setup, as it automatically gives you suggestions for the experiment configuration, e.g. Airyscan acquisition or camera-based acquisition. In the following guide you will learn how to use Smart Setup and acquire a first confocal image. Due to the huge variety of samples (and suitable experiment configurations), this guide shows only the necessary basics.
2. Click on Add.
3. Select the dyes used in your experiment. Double-click on a dye entry to add it to the list.
à You will see the image of your sample in the center screen area.
7. Search the desired sample area and focus with the joystick. Alternatively use the mouse
wheel while pressing the Ctrl key.
9. Click on Set Exposure to automatically adapt the sensitivity of the detector to the sample brightness.
10. Optional: If you work with weakly stained samples, you can try to increase the laser power
under Channels > Lasers. Then repeat the last step by clicking on Set Exposure.
You have successfully acquired a confocal image. Save the image in the Images and Documents
tool or under File > Save.
LSM Plus is an advanced imaging function in confocal mode. It delivers higher resolution and better SNR by optimizing the acquisition settings and applying an additional linear deconvolution to the data. LSM Plus can be used in confocal LSM mode with 1.0x confocal sampling and lower sampling factors down to 0.8x Nyquist. However, the best resolution results are achieved with higher sampling factors.
Choose LSM confocal as the acquisition track for standard confocal acquisition. The following channels are available for confocal acquisition:
Info
LSM confocal tracks can be used in a multitrack acquisition with either NDD (Non Descanned
Detector) tracks or Airyscan SR tracks.
2 Spectral display
Shows the emission spectrum of the selected dye(s) assigned to active or inactive channels of the displayed track and the laser lines used for excitation.
3 Detection range
A bar slider appears for each activated channel. The slider color matches the color chosen for the corresponding channel. Channel 1 to Channel 3 are aligned from left to right. The width of a slider represents the detection range covered by the respective channel. Change the position of a slider with the left mouse button using the cross that appears when the cursor is moved over the slider. Use the arrows at the edges of a slider to expand or shrink its width. Moving and adaptation are only possible for internal channels. Channels with emission filters like BiG.2 or Airyscan 2 cannot be adapted in their spectral range unless a different filter is selected.
6 Reflection
Check the box for imaging in reflection mode. This mode provides an easy tool to get a
better understanding of surface structures.
9 Main dichroic and laser line icon for visible lasers (445, 488, 514, 543, 561, 594, 639 nm)
The two icons represent the main beam splitter and the laser line that is coupled to the port facing the corresponding main dichroic filter wheel. When you select a laser line, either in this tool window or elsewhere (see Lasers Tool or Channels Tool [} 914]), the matching main dichroic filter is set automatically. The filter changes again upon selection of an additional or a different laser line, if necessary. Deactivation of a line does not change the main dichroic filter. In case no matching dichroic is available, the 80/20 dichroic is selected. You can change the dichroic to a self-chosen setting if needed. Note that the filter is only changed automatically again after choosing an additional line or unselecting and reselecting a different (combination of) line(s).
10 Main dichroic and laser line icon for invisible lasers (405 nm, Multiphoton Laser)
The two icons represent the main beam splitter and the laser line that is coupled to the port facing the corresponding main dichroic filter wheel. When you select a laser line, either in this tool window or elsewhere (see Lasers Tool or Channels Tool [} 914]), the matching main dichroic filter is set automatically. The filter changes again upon selection of an additional or a different laser line. Deactivation of a line does not change the main dichroic filter. In case no matching dichroic is available, the 80/20 dichroic is selected. You can change the dichroic to a self-chosen setting if needed. Note that the filter is only changed automatically again after choosing an additional line or reselecting a different (combination of) line(s). Make sure to deselect all lines first before selecting a new (combination of) laser line(s).
10.7.2.2 Airyscan SR
Choose Airyscan SR as the acquisition track for superresolution imaging with the highest image quality. The following channel is available for Airyscan SR acquisition:
1 Spectral display
Shows the emission spectrum of the selected dye assigned to Airyscan 2 and the laser
lines used for excitation.
Multiplex modes allow faster image acquisition with Airyscan 2 detectors. The acquisition is parallelized in the Y-direction, which makes it possible to process full SR or confocal resolution even though not every line in Y was scanned during acquisition. The Airyscan 2 detector has nonetheless acquired data for all lines in the image.
Airyscan MPLX can be selected in the drop-down list of the Imaging Setup.
For information on how to acquire LSM images with Airyscan multiplex modes, see Acquiring
LSM 900 Images with Airyscan 2 Multiplex Modes [} 1200] or Acquiring LSM 980 Images
with Airyscan 2 Multiplex Modes [} 1202].
Choose LSM Lambda as the acquisition track for spectral imaging with subsequent dye unmixing (emission fingerprinting). This mode allows the separation of strongly overlapping emission signals.
The following channels are available for LSM Lambda acquisition:
Info
Lambda tracks cannot be used in a multitrack acquisition.
1 Create OFP
Activates Online Fingerprinting for the currently selected lambda track, see LSM Online Fingerprinting [} 1230] for general information.
4 Resolution
Sets the spectral resolution with the drop-down list. The Number of Scans that are performed for the selected resolution is displayed on the right side. Depending on the scan head configuration, the acquisition of the emission fingerprint image is either done in parallel with one illumination of the sample (32 channel configuration with 8.9 nm or lower spectral resolution), or in several illumination steps (32 channel configuration with 4.3 nm or higher spectral resolution; 6 and 3 channel configuration).
Choose LSM Online Fingerprinting as the acquisition track for spectral imaging with automatic subsequent dye unmixing (Emission Fingerprinting). This mode allows the separation of strongly overlapping emission signals with a reduced final data volume, as no raw data are stored. It also speeds up data generation when the parameters for unmixing are known.
Online Fingerprinting can be activated also after a lambda mode acquisition has been defined
with the button Create OFP, see LSM Lambda [} 1228]. In this case all settings including detec-
tors, lasers, detection gain etc. will be transferred. Online Fingerprinting can also employ addi-
tional detectors, including the NIR detection channels.
The channel for Online Fingerprinting is selected automatically.
Info
Online Fingerprinting tracks cannot be used in a multitrack acquisition.
3 Resolution
Selects the spectral resolution with the drop-down list. The Number of Scans that are performed for the selected resolution is displayed on the right side. Depending on the scan head configuration, the acquisition of the emission fingerprint image is either done in parallel with one illumination of the sample (32 channel configuration with 8.9 nm or lower spectral resolution), or in several illumination steps (32 channel configuration with 4.3 nm or higher spectral resolution; 6 and 3 channel configuration).
4 Dye Selection
Selects one or more dyes from the unmixing spectra database. Online Fingerprinting re-
quires pre-defined spectra which are used for the automatic subsequent unmixing
process. The settings for Online Fingerprinting (resolution, spectral width) should ideally
match the acquisition parameters for the unmixing spectra.
The drop-down list Reference Spectrum displays all reference spectra available for linear unmixing which have been stored in earlier sessions. A double click on a spectrum name loads it as a reference spectrum.
For each selected spectrum, a Spectral Gain slider is added to the channels menu. This gain adjustment allows you to balance the reference spectra in the unmixing result, even when they are excited by the same laser line and detected mostly by the same detector(s).
Choose NDD as the acquisition track for multiphoton imaging. Using non-descanned detectors for imaging when exciting with a multiphoton laser provides significantly better image quality for deep tissue imaging.
Info
Filters in front of NDDs have to be inserted/changed manually. Changed filters must be indicated to the ZEN software via the MTB software. This is possible while ZEN is running by clicking on Apply after changing the filters as needed.
The following channels are available for NDD acquisition (all channels are optional; if non-descanned detectors are not available, NDD tracks cannot be set up; the maximum number of channels in reflection/transmission mode is limited to 5 and depends on the type of stand used):
Info
NDD tracks can be used in multitrack acquisition with other NDD or LSM confocal tracks. When the tracks are switched between NDD and other modes during an experiment, the system shows a safety warning. This warning does not impair data acquisition; it only indicates the intermediate state in which the corresponding safety measures take effect.
1 Display of detection range as colored bar for NDDs positioned in reflection mode
This area displays the detection range of all selected channels. Only detectors which col-
lect the fluorescent signal reflected from the sample are represented in this line.
Dyes at the extreme red end of the wavelength spectrum can be imaged with the 730 nm laser and the mandatory external 2-channel NIR detector. Both channel mode and lambda mode for spectral unmixing are available.
The NIR detector is displayed in the Imaging Setup [} 1221] below the internal VIS detection, due to the external position of this detector. The channels can be activated, and the split point of the SBS is set automatically. The detection bands within the detector cannot be changed in the tool, as they are defined by the user-exchangeable filter sets.
To activate the detection in confocal channel mode, you can either use Smart Setup for a fully
automatic setup, or select the respective NIR dyes and activate a matching NIR channel according
to the emission spectrum. The SBS and laser MBS positions are set automatically.
In the Channels tool, the additional channels and the 730 nm laser are available like all other channels and laser lines. Note that in photon counting mode of the channels, the 730 nm laser is best operated at minimum power levels.
If you have selected LSM lambda mode for acquisition, a unified detection range slider allows you
to define the detection range including the selected NIR dyes. The appropriate detectors are auto-
matically added to cover the selected detection range. SBS and laser MBS positions are set auto-
matically as well.
The detection ranges available in ZEN depend on the filter cube mounted in the NIR detector. The NIR filter cube needs to be entered correctly in the MTB application in order to have the correct menu and automatic setting of SBS filters.
The NIR filter cubes are user-exchangeable via the same mechanical interface as the BiG detector filter cubes. The following filter sets are available for the cubes:
ZEN offers the possibility to acquire images with FLIM (fluorescence-lifetime imaging microscopy) contrast in collaboration with PicoQuant. For this, an LSM 980 is combined with additional hardware (lasers, detectors), TCSPC electronics, and software by PicoQuant. The software setup for this case consists of two parts: ZEN (with special components in MTB and a FLIM application license) and SymPhoTime 64 by PicoQuant. SymPhoTime 64 runs on a separate PC that is connected to the system PC. The intensities of the pulsed laser lines from PicoQuant are adjusted via an attenuator controlled by the NI-DAQ driver. The laser power and pulse frequency are controlled with SymPhoTime 64 or the FLIM Acquisition tool (via its connection to SymPhoTime 64). The attenuator is necessary to achieve low laser intensities, which cannot be done by only reducing the electrical laser power.
To be able to acquire images with FLIM, you have to activate a certain option that allows control
of ZEN via its macro interface.
See also
2 Setting Up and Starting a FLIM Acquisition [} 1237]
Prerequisite ü The hardware and MTB are set up. In the active MTB configuration, the FLIM Laser module
and the wavelength of the laser line are set.
ü You have set up ZEN for control via the macro interface, see Preparing ZEN for FLIM Con-
nection [} 1236].
ü ZEN is set up and running on the LSM PC.
ü You have activated the FLIM application license in Tools > Modules Manager.
ü SymPhoTime 64 by PicoQuant is set up and running on the FLIM PC.
1. On the Acquisition tab, open the Imaging Setup tool.
à The track selection is displayed.
2. Add a LSM Confocal track.
à The track is added and a FLIM button is displayed in the Imaging Setup tool. Experi-
ments that combine FLIM acquisition with other channels (e.g., Airyscan, widefield) are
not supported. Note that if you add other tracks (e.g. Airyscan), this button is not dis-
played.
3. In the Channels tool, activate the laser line for FLIM acquisition as specified in the MTB.
4. In the Imaging Setup tool, set up the beam path. If you want to use pulsed excitation, first select the pulsed laser line and then choose a suitable MBS (main beam splitter) at the invisible laser port. The MBS at the visible laser port should be set to Plate. Configure the detection beam path inside the LSM scan module and select the FLIM detector. Alternatively, load the default experiment for FLIM.
5. Go to the Acquisition Mode tool and define your settings. Make sure you have selected Frame acquisition, set Averaging to None, and use mono-directional scan. The maximum image size for FLIM acquisition is 4096 x 4096 pixels. Note that the online display for FastFLIM is only available for images up to 1024 x 1024 pixels and will be switched off for larger frame sizes.
à In case the acquisition requirements for FLIM are not fulfilled, the FLIM button is dis-
abled and a tooltip provides further information.
6. If you want to configure a Z-Stack and/or Time Series experiment, activate the respective
checkbox on the Acquisition tab and define the settings in their respective tools.
à You have set up an acquisition experiment.
7. In the Imaging Setup tool, click FLIM.
à The FLIM Acquisition dialog opens.
8. In the Files tool, define the naming of the files. The names must not contain "."; any periods are converted into "_".
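The naming rule in step 8 amounts to a simple character substitution (a purely illustrative helper, not part of ZEN):

```python
def flim_safe_name(name):
    """Replace periods, which are not allowed in FLIM file names,
    with underscores, mirroring the conversion described above."""
    return name.replace(".", "_")
```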
9. In the Measurement Settings tool, define when the FLIM acquisition is stopped.
10. In the Laser Settings tool, adjust your laser settings for the individual laser line(s). In addi-
tion, the intensity for the shutter controlling All Lasers must be adjusted. Also define the
Repetition Rate according to the fluorescence lifetime. In general, the repetition rate
should be as high as possible to achieve the highest possible photon count rate. However,
the time window after a laser pulse should still be large enough to allow the population of
excited dye molecules to deplete completely before the next laser pulse.
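The repetition-rate trade-off in step 10 can be made concrete with a rule-of-thumb calculation; the five-lifetimes window used here is a common heuristic, not a ZEN setting:

```python
def max_repetition_rate_hz(lifetime_ns, lifetimes_per_window=5):
    """Highest pulse rate whose inter-pulse window still spans the given
    number of fluorescence lifetimes, so the excited state can deplete."""
    window_s = lifetimes_per_window * lifetime_ns * 1e-9
    return 1.0 / window_s

# A dye with a 2.5 ns lifetime allows up to 80 MHz at a 5-lifetime window.
```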
11. Click Test to test your current acquisition settings.
à A continuous test acquisition starts and is displayed as image document in ZEN and Sym-
PhoTime 64. You can adjust your settings while it is running. To finish, click Stop.
12. Click Measurement to start the actual measurement experiment.
à The acquisition starts and all controls in the dialog are disabled. In the Count Display
tool, you can see statistical information.
à The measurement is stopped based on the settings in the Measurement Settings tool.
See also
2 FLIM Acquisition Dialog [} 1238]
1 Experiment Information
If the configured experiment in ZEN is a Z-Stack or Time Series experiment, this section
displays the respective information (z-planes, time points). Use this information to follow
the acquisition.
2 Tabs
This section contains two tabs. The Maintain/Info tab provides further information about the acquisition currently configured in ZEN and all metadata that are transferred to SymPhoTime 64. This serves informational purposes only; the parameters can only be changed via the main ZEN tools (e.g. the Acquisition Mode tool). The Application tab displays controls for acquisition settings.
3 Files Section
Here you can define names for the files, see Files Section [} 1239].
4 Measurement Settings
This section allows you to define when a running acquisition is stopped, see Measure-
ment Settings [} 1239].
5 Laser Settings
This section enables you to adjust individual laser settings, such as the intensity, see
Laser Settings [} 1240].
6 Statistics
This section displays acquisition statistics like the Sync (= laser repetition rate), the aver-
age count rate or the maximum number of photons for each channel.
7 Button Bar
This section contains the three main buttons to control the acquisition, see Button Bar
[} 1240].
See also
2 Setting Up and Starting a FLIM Acquisition [} 1237]
Parameter Description
Group Defines a group name for the generated file(s).
Parameter Description
Stop Manually Only visible if Z-Stack and Time Series are deactivated in ZEN.
The acquisition stops when it is stopped manually.
Stop After Elapsed Sets the time after which each acquisition is stopped. For a z-stack,
Time after the defined time the acquisition of the next z-plane is started.
For time series, after the defined time the acquisition of the next time
point is started.
Parameter Description
Photons in Bright- Sets the number of photons after which each acquisition is stopped.
est Pixel For a z-stack, after reaching the defined photon number the acquisi-
tion of the next z-plane is started. For time series, after reaching the
defined photon number the acquisition of the next time point is
started.
Stop After Frames Sets the number of frames after which each acquisition is stopped.
For a z-stack, after the defined number of frames the acquisition of
the next z-plane is started. For time series, after the defined number
of frames the acquisition of the next time point is started.
Parameter Description
Individual Lasers Activated: Enables you to adapt the absolute Power of the laser as well as the relative Intensity for the respective laser. Keep the default power settings to ensure the smallest pulse width of the laser. To increase the laser power, adjust the Intensity.
All Lasers Adapts the intensity for all lasers. Must be higher than 0%.
– Standard Selects the standard laser pattern which operates one pulsed laser at
a time at the defined repetition rate.
Parameter Description
Test Starts a continuous acquisition for testing and adjusting the FLIM ac-
quisition parameters.
Measurement Starts the FLIM experiment and disables the controls in the Settings
windows of the dialog.
Fluorescence correlation spectroscopy (FCS) records temporal changes in the fluorescence emission intensity caused by single fluorophores passing the detection volume. It provides quantitative localized measurements of physical parameters including mechanisms of transport, molecular mobility, and densities of fluorescently labeled molecules based on temporal fluctuations which are detected via their emission signal intensity. In the most basic configuration, FCS examines the inherent correlations exhibited by
the fluctuating fluorescent signal from labeled molecules as they transit into and out of a specified
excitation volume. The most easily observed change in intensity is the fluctuation of the concen-
tration of fluorescent molecules. The temporal auto-correlation of the recorded signal intensity
provides a quantitative measurement of the strength and duration of the intensity changes. These
parameters are used to calculate the average number of molecules and their average diffusion
time through the excitation volume. From this, further parameters can be deduced like concentra-
tion and size (shape) of the molecule. The dual-color variation, termed Fluorescence Cross-Correla-
tion Spectroscopy (FCCS), is utilized to probe two species labeled with different fluorophores.
FCCS can extend investigations to the examination of biochemical reactions between two part-
ners, such as reaction rates, kinetics, fractions of binding or reacting molecules, and mobilities of
a complex formed between the partners.
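The temporal autocorrelation described above can be sketched in a few lines. In this simplified picture, which ignores detector shot noise and afterpulsing, the zero-lag amplitude G(0) approximates 1/N for a Poisson-distributed number of molecules in the volume:

```python
import numpy as np

def autocorrelation(intensity):
    """Normalized intensity autocorrelation G(tau) = <dI(t)dI(t+tau)> / <I>^2."""
    I = np.asarray(intensity, dtype=float)
    dI = I - I.mean()
    n = len(I)
    # FFT-based correlation; zero-padding avoids circular wrap-around
    f = np.fft.rfft(dI, 2 * n)
    acf = np.fft.irfft(f * np.conj(f))[:n]
    acf /= np.arange(n, 0, -1)  # per-lag normalization
    return acf / I.mean() ** 2

# For a fluctuating molecule number, 1 / G(0) estimates the average
# number of molecules N in the detection volume.
```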
For successful FCS measurements, both the hardware used and the sample must meet certain standards. FCS-based techniques measure fluctuations caused by single molecules within a small volume of typically less than a femtoliter, which needs to be precisely defined. Hence a high-end objective paired with highly sensitive detectors and a low-noise, highly stable laser source is a prerequisite. Especially for FCCS measurements, color shift effects of optical elements have a negative impact on the results. Hence the measurement spot must be kept on the optical axis/center of the scan field when FCCS data are acquired. The stage rather than the scanner needs to be used to position the measurement spot in the very center of the scan field.
FCS measurements need a single photon counting detector and a laser wavelength suitable for
fluorophore excitation. FCCS measurements are typically performed observing two fluorophores
excited with different wavelengths and using two counting detectors.
LSM 980 provides several detectors which can be operated in counting mode and hence can be
used for FCS and FCCS measurements.
The most basic configuration includes two counting capable channels, more advanced systems
provide 5 or even 7 counting capable channels. In addition, BiG.2, an external detection unit
which can be mounted onto the scan head of each of these configurations, can be used for FCS
and FCCS.
FCS and FCCS measurements are performed either using dissolved molecules/dyes or using cultured cells, observing proteins of low to medium abundance (a few pM up to µM) at endogenous expression levels.
For successful measurements the preparation of the sample must meet some prerequisites.
When analyzing labeled molecules in solution, the labeled molecule/dye should be dissolved in a water/buffer solution and diluted such that the concentration is between 1-50 nM. After dilution, the solution must be put into a chamber with a # 1.5 cover-glass bottom. The glass bottom thickness/quality influences the measurement, which relies on a precisely defined focal spot. Make sure to add enough medium to cover the entire bottom of the chamber. Common dyes used for this kind of experiment are, for example, any kind of Alexa Fluor® dye.
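For orientation, the concentration range above corresponds to only a handful of molecules in the femtoliter-scale detection volume; the 0.5 fL volume used below is an assumed, illustrative value:

```python
AVOGADRO = 6.022e23  # molecules per mole

def molecules_in_volume(conc_nmol_per_l, volume_fl):
    """Average number of labeled molecules in the detection volume."""
    return conc_nmol_per_l * 1e-9 * volume_fl * 1e-15 * AVOGADRO

# 10 nM in an assumed ~0.5 fL confocal volume gives roughly 3 molecules,
# i.e. fluctuations from single molecules dominate the signal.
```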
For cultured cells, endogenous expression of fluorescently labeled proteins is best achieved using modern genome editing techniques. This keeps the expression level low enough for successful FCS and FCCS measurements.
Imaging Setup
Parameter Description
Continuous  Opens a new FCS document and starts a continuous FCS measurement
according to the settings defined in the Channels Tool.
Channels Tool
For description of the parameters in the Channels tool, see Channels Tool - Measurement Set-
tings [} 1243].
Parameter Description
Detector Counting Tool  Shows counts per second (Count), correlation amplitudes (Correlate),
or counts per molecule (CPM). The values are updated whenever an
FCS acquisition is running. The displayed values can be averaged over
1, 3, or 10 seconds.
Parameter Description
Draw FCS spot Defines positions in xy (FCS spots) for FCS measurement using the
scanner for positioning.
Parameter Description
Update Z  Allows assigning different Z positions to different FCS spots. Z-focus
positions are only indicated and relevant for FCS spots, not for any
other experiment regions.
File Menu
Parameter Description
New FCS Document  Opens a new FCS document in the Center Screen Area. Existing data
can be added to the new document by copy and paste or drag and
drop from stored documents to rearrange and combine data sets.
Parameter Description
Correlation view  Opens when data acquisition starts. It shows count rate and correlation
graphs for all active acquisition channels.
Parameter Description
Measurement Time  Here you can enter the duration of one measurement.
Repetitions  Here you can enter the number of repetitions for one measurement.
Adjust  Opens the Adjust Pinhole wizard to perform an automatic count rate
versus pinhole position scan.
Z-scan  Opens the Z-Scan [} 1244] wizard to perform a Z-scan and plot the
averaged count rate as a function of the Z position to find the optimal
focus position.
Fit Model  Here you can select a fit model for fitting the data of the selected
channel after the measurement has finished.
A different fit model can be selected for each FCS/FCCS channel.
Info
If two lasers and two detectors (for the visible light path) are defined, the plot shows
more curves, one for each laser/detector pair. Potential cross-talk effects can be excluded
by defining only one laser and detector for the adjustment procedure. The curves match
the color of the detector defined in the Imaging Setup or Channels tool.
Parameter Description
Invisible light path  Only displayed if a laser with invisible light (405 nm or IR laser) is
defined for the acquisition.
– Coarse
Moves the pinhole position in X or Y over the whole range while acquiring
FCS data for each position. The count rate data are averaged,
fitted to a Gaussian curve, and plotted against the pinhole position.
The plot is displayed in the center area in the Calibration View
tab. It is possible to change between the view tabs during acquisition.
– Fine
Moves the pinhole position in X or Y by about 10 micrometers around the
currently stored value while acquiring FCS data for each position. The
count rate data are averaged, fitted to a Gaussian curve, and
plotted against the pinhole position. The plot is displayed in the center
area in the Calibration View tab. It is possible to change between
the view tabs during acquisition.
– Pinhole position sliders/input box
Before the adjustment scan is started, the slider and the input box beneath
it show the currently stored pinhole position in X and Y. The
sliders can be moved and the position edited once the scan has finished.
This allows moving the position to a value different from the one suggested
by the system. The value indicated at that point will be stored when
clicking Finish to leave the wizard. The pinhole position in the plot is
shown in micrometers. One micrometer corresponds to approximately
one motor step.
Visible light path  Only displayed if a laser with visible light (445 nm to 639 nm) is
defined for the acquisition.
Cancel Cancels the procedure and keeps the existing pinhole position.
Finish  Applies the new pinhole position as the general position, also for all
other LSM acquisition functions.
Z-Scan provides an automated procedure to optimize focus position for auto- or cross correlation
measurements. The Z-Scan provides the following parameters:
Parameter Description
Current Position  Displays the current position. This position changes during the actual
scan procedure.
Range  Defines the range of the Z-scan; either type in a value or use the
scroll arrows of the Range edit box.
Step Width  Defines the step width; either type in a value or use the scroll arrows
of the Step Width edit box.
Parameter Description
Start  Performs the scan.
During the scan, the count rate is plotted for the selected channels
using the channel-assigned colors.
Finish  Click Finish to apply this Z position for the FCS measurement.
Cancel  Click Cancel to discard the values and keep the original Z position.
This view contains three layers, which can be selected by clicking the corresponding tab:
Tab Description
Correlation  Displays the count rate trace, the correlation function, the photon
counting histogram, the pulse density histogram, and the result table,
see Correlation Tab [} 1246].
Fit  Displays the fit graph, the residuals, and the used model, see Fit Tab
[} 1250].
Info  Displays the defined name, any typed-in comment, and some of the
metadata, see Info Tab [} 1256].
If you close an FCS document without saving, you will be asked in the Close image dialog
whether you want to save the data.
In the Count Rate diagram the count rate(s) (CR) in kHz is plotted vs. running time. If a cross
correlation set-up is used, the count rate trace for each channel is displayed.
Zoom into the diagram by pressing the left mouse button and dragging a rectangle around the
area you want to zoom into. After releasing the button, the zoomed image is displayed.
Clicking the right mouse button within the diagram opens the Count rate context menu.
Parameter Description
Reset diagram zoom  Resets to the original image size when zoomed in.
New cut region  When raw data files are available, define cut regions by choosing the
New cut region option. The cut region size can be adapted by two
sliders. The cut-out region is displayed as a matted box. Select
independent cut regions for the different channels of a cross-correlation
experiment. In cross-correlation calculations, a cut region in one
channel will automatically define the same cut region in the other
channel. You can select more cut regions by repeatedly choosing this
option. These regions may overlap.
Remove cut region  Removes the last cut region. By repeatedly choosing this option, the
cut regions are removed in the reverse order of their creation.
Copy text to clipboard  Copies the diagram coordinates into the clipboard, from which they
can be pasted into other programs like Excel.
Write text to file  Stores the diagram coordinates in a .txt file. You are prompted to
choose a name and a folder before saving.
Copy graphics to clipboard  Generates an image of the graph with the legend text and color-coded
graphs, which can be copied to standard Windows documents.
Line width  Allows selecting the thickness of the graph. Six settings are possible.
This diagram shows the frequency plotted against the time elapsed between two subsequent
pulses (i.e. photons recorded from the detector). To obtain this histogram, the time distance
between two photons is evaluated using the raw data.
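The construction of this histogram can be sketched in a few lines of Python; `pulse_distance_histogram` is a hypothetical helper illustrating the principle, not the actual ZEN implementation.

```python
from collections import Counter

# Build a pulse-distance histogram from raw photon arrival times:
# the time differences between consecutive photons are binned and
# their frequencies tallied.
def pulse_distance_histogram(arrival_times_ns, bin_ns):
    """Frequency of inter-photon time distances, binned to bin_ns."""
    deltas = [b - a for a, b in zip(arrival_times_ns, arrival_times_ns[1:])]
    return Counter((d // bin_ns) * bin_ns for d in deltas)

# Four photons at 0, 120, 180 and 420 ns give distances 120, 60, 240 ns.
hist = pulse_distance_histogram([0, 120, 180, 420], 100)
```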
Zoom into the diagram by pressing the left mouse button and dragging a rectangle around the
area you want to zoom into. After releasing the button, the zoomed area of the graph is displayed.
Clicking the right mouse button opens the Pulse Distance Histogram context menu:
Parameter Description
Reset diagram zoom  Resets to the original image size when zoomed in.
Copy text to clipboard  Copies the diagram coordinates into the clipboard, from which they
can be pasted into other programs like Excel.
Write text to file  Stores the diagram coordinates in a .txt file. You are prompted to
choose a name and a folder before saving.
Copy graphics to clipboard  Generates an image of the graph with the legend text and color-coded
graphs, which can be copied to standard Windows documents.
Line width  Allows selecting the thickness of the graph. Six settings are possible.
The diagram shows the correlation functions for each activated channel.
Zoom into the diagram by pressing the left mouse button and dragging a rectangle around the
area you want to zoom into. After releasing the button, the zoomed area of the graph is displayed.
Clicking the right mouse button within the diagram opens the Correlation context menu:
Parameter Description
Reset diagram zoom  Resets to the original image size when zoomed in.
Copy text to clipboard  Copies the diagram coordinates into the clipboard, from which they
can be pasted into other programs like Excel.
Write text to file  Stores the diagram coordinates in a .txt file. You are prompted to
choose a name and a folder before saving.
Copy graphics to clipboard  Generates an image of the graph with the legend text and color-coded
graphs, which can be copied to standard Windows documents.
Line width  Allows selecting the thickness of the graph. Six settings are possible.
This diagram (also called Photon Counting Histogram) shows the frequency plotted against the
photon number in a certain time bin. To obtain this histogram, the number of pulses (i.e. photons
recorded from the detector) in a moving time window is counted and included in a histogram.
The binning can be determined when loading a *.fcs file with raw data saved along with it, using
the Reload button.
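The principle can be sketched as follows; for simplicity the sketch counts photons in fixed, non-overlapping bins rather than the moving time window mentioned above, and `photon_counting_histogram` is an illustrative helper, not ZEN code.

```python
from collections import Counter

# Build a photon counting histogram (PCH): count the photons falling
# into each time bin, then tally how often each photon number occurs.
def photon_counting_histogram(arrival_times_ns, bin_ns, total_ns):
    """Frequency of photon counts per bin of width bin_ns."""
    counts = [0] * (total_ns // bin_ns)
    for t in arrival_times_ns:
        if t < total_ns:
            counts[t // bin_ns] += 1
    return Counter(counts)

# Photons at 10, 20, 130 and 350 ns with 100 ns bins over 400 ns:
# the four bins contain 2, 1, 0 and 1 photons.
pch = photon_counting_histogram([10, 20, 130, 350], 100, 400)
```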
Zoom into the diagram by clicking and holding the left mouse button and dragging a rectangle
around the area you want to zoom into. When the button is released, the zoomed area of the
graph is displayed.
Clicking the right mouse button opens the Photon Counting Histogram context menu:
Parameter Description
Reset diagram zoom  Resets to the original image size when zoomed in.
Copy text to clipboard  Copies the diagram coordinates into the clipboard, from which they
can be pasted into other programs like Excel.
Write text to file  Stores the diagram coordinates in a .txt file. You are prompted to
choose a name and a folder before saving.
Copy graphics to clipboard  Generates an image of the graph with the legend text and color-coded
graphs, which can be copied to standard Windows documents.
Line width  Allows selecting the thickness of the graph. Six settings are possible.
The Result Table below the diagrams displays the measurement results. The width of the columns
can be changed by moving the border lines. The order of the columns can also be changed: click
on the header of the relevant column, hold down the mouse button, and move the column to the
required position. When the mouse button is released, the column is inserted in the new position.
A scrollbar at the bottom of the table allows viewing all parameters that might not fit within the
width of the table. A scrollbar on the right allows access to all repetitions.
1. Select a line in the table by clicking on it with the mouse (multiple selection is possible by
additionally pressing the Shift or Ctrl key).
→ Selected lines are highlighted in color and displayed in the legends of the diagrams.
→ Corresponding graphs are shown color-coded in the graphic displays.
2. Select lines and define properties of the table by pressing the right mouse button while the
cursor is within the table.
→ The Result Table context menu opens, offering different options.
Parameter Description
Select all  Selects all measurements (rows) in the table, regardless of which line
is highlighted.
Select all channels  Selects all rows belonging to the same channel of a repetitive measurement
as the highlighted row.
Select all repetitions  Selects all rows belonging to the same repetitive measurement as the
highlighted row.
Select all positions  Selects all rows belonging to the same measurement position as the
highlighted row.
Select all kinetic indices  Selects all rows belonging to the same kinetic time point as the highlighted
row.
Cut  Stores the highlighted rows in the clipboard. The data are deleted
from the old window only when they are pasted into a new one.
Paste  Pastes rows currently stored in the clipboard into the table.
Copy text to clipboard  Copies the table contents into the clipboard, from which they can be
pasted into other text programs.
Write text to file  Stores the table contents in a .txt file. You are prompted to
choose a name and a folder before saving.
The Fit tab provides access to the tools for working with newly generated or already existing
data for data analysis.
10.7.5.1.2.2.1 Diagrams
Parameter Description
Reset diagram zoom  Resets any zoomed image.
Copy text to clipboard  Copies the diagram coordinates into the clipboard, from which they
can be pasted into other programs like Excel.
Write text to file  Stores the diagram coordinates in a .txt file. You will be prompted to
choose a name and a folder before saving.
Copy graphics to clipboard  Generates an image of the graph with the legend text and color-coded
graphs, which can be copied to standard Windows documents.
Line width  Allows selecting the thickness of the graph. Six settings are possible.
View measured data  Displays the measured Correlation or PCH curve in addition to the Fit
curve. If this option is not selected, only the Fit curve is displayed.
Show fit range text  Displays the start and end values of the fitted data of the correlation
curve defined by the red and blue bars. The positions of the bars can
be adjusted by drag and drop. Start Channel and End Channel determine
which part of the correlation curve should be fitted to the
model. The start and end positions of the channels are displayed as
correlation times (in µs).
This option and the bars are only available for the Fit Correlation diagram.
Scaling  Opens the Diagram scaling window. Enter the required percentage
value (from 1 to 100) for the scaling and click OK to rescale, or Cancel
to keep the old value. The scaling of the G(t) axis is adjusted.
This option is only available for the Fit deviation diagrams.
The Channel area within the Fit View tab shows the active channel (name and color) to which
the Fit and Fit All buttons apply. If more than one row is selected (shown in blue in the result
table), the active channel is the channel in the last selected row of one or multiple selected
channels.
Fig. 117: Fit tab, Channel area
2 Define... button
7 Parameters list
Parameter Description
Model  The drop-down list contains all previously defined and stored fit models.
Selecting a model by clicking on its name in the list loads the
model and displays a set of parameters, which can be individually
adapted when applying the fit model to the data set(s). The parameters
are updated once the fit model is applied to the data.
Define Click to define a new Fit model. For a detailed description of the avail-
able parameters, see Define Model Dialog [} 1262].
Link  Links the parameter globally. Type the required letters, separated by
commas.
§ M: links the parameter over different measurements
§ K: links the parameter over kinetic indices with the same time
points
§ P: links the parameter over positions obtained from measurements
at the same site
§ R: links all repetitions of one measurement
§ C: links the parameter for the same channel
Alternatively, select the link(s) from the drop-down list opened by
clicking the drop-down arrow.
If a linkage is activated that does not apply to a measurement, it is
disregarded. Otherwise, the same rules apply as for Fit and Fit
all regarding which data rows the links apply to.
Upper Limit  Defines the upper limit value tolerated for a fit; if this value is exceeded,
the fit is rejected and another (global) maximum is searched
for. The default value depends on the parameter.
The Upper Limit parameter value is only accessible if the Show Limits...
option is selected within the interactive selection list opened
with a right mouse click into the parameter panel.
Lower Limit  Defines the lower limit value for a fit parameter; if the value falls below
this limit, the fit is rejected and another (global) minimum is searched for.
Change the value by editing the default value in the edit box. The default
value depends on the parameter, but is in most cases 0.
The Lower Limit parameter value is only accessible if the Show
Limits... option is selected within the interactive selection list opened
with a right mouse click into the parameter panel.
Parameter Description
§ Start assigns a start value to the parameter and leaves the parame-
ter free to fit. In this case, no initial guesses will be made.
Value  Displays the currently assigned value of the parameter. Change the
value by editing the default value in the edit box.
§ The parameter settings are not stored with the experiment or the fit model. They need to be
defined anew for each fit procedure.
§ A text in the state display panel warns in yellow writing about any constraints or errors of the
defined fit model and suggests suitable changes. For example, whenever inconsistencies
are present, e.g. having two mutually dependent parameters free, such as the geometric
factor and the number of molecules, the system gives a warning about the mistake.
Info
Generally, it is accepted that non-linear fitting procedures yield more reliable results when the
number of free-floating parameters is low. It is recommended to fix parameters that are
known from independent measurements. Good candidates for fixing are the diffusion times of the
free dye and of the free (i.e. not bound) partner, which can be determined in previous measurements.
Another good candidate is the structural parameter, which is an instrumental parameter.
The quality of the fit is displayed in the chi2 display box of the Fit table. The χ2 (chi2) value should
approach zero for highest quality. The range of the data to be used for fitting can be defined by
repositioning the red and blue bars, originally set at the beginning and end of the correlation diagram,
to the required start and end positions. The new range is applied to the next Fit
procedure.
Lower and upper limits are only displayed if selected. To select, click with the right mouse button
into the result table to open the context menu.
Parameter Description
Fit Performs a fit for all highlighted measurements (rows) according to
the loaded model. When the fit is completed, the free parameters are
replaced with the new fitting results and the fit graph and result table
are updated.
Fit all Performs a fit for all measurements (rows) that have the same channel
as the one displayed in the Channel display field (active channel). All
other channels are ignored. Changing to a row with a different chan-
nel loads the last assigned model for that row.
Parameter Description
To Method This function is not supported.
The data result table in the Fit view displays the values of the fitted parameters. It is updated
when clicking the Fit button in the Model panel. Measurement rows in the Result table can be activated/
deactivated by checking/unchecking the corresponding check boxes. Deactivating these
check boxes does not exclude the relevant rows from all subsequent evaluation procedures, but
only from the average. Average curves are updated with respect to the rows taken into consideration.
Info
The complete experiment settings, including all multidimensional acquisition settings, are
stored with the experiment when it is saved.
The panel shows, apart from the acquisition date and the pinhole size, only information
about the single measurement parameters.
See also
2 Fit Action Buttons [} 1255]
The Correlation action tab lists all correlation channels with their assigned channel colors, including
cross-correlation channels.
2 Diagrams
Choose the diagrams to be displayed by activating the corresponding buttons. Selected
diagrams are highlighted in blue. Clicking the appropriate button toggles the diagram
between ON and OFF.
The Table action tab provides several options to sort and display the data.
4 Parameters list
Only parameters set active are visible in the table. The order of the displayed parameters
can be adjusted using drag and drop in the table itself. Deactivating these check boxes
excludes the relevant rows from all subsequent evaluation procedures, such as averaging.
These settings are stored when saving the data. Immediate reactivation is possible
by activating the check box again. The scroll bar allows viewing all content of the display
box.
Whenever a *.FCS file is opened and raw FCS data (*.RAW files) were saved along with it during
acquisition, the Reload ... button is available in the Correlation tab in the FCS
document view.
The tool allows redefining several parameters for
§ Correlation [} 1258]
§ Count rate [} 1260]
§ PCH (Photon Counting Histogram) [} 1260]
§ Electronic dust filter [} 1261]
The panel opens with the default settings which are applied for the original data acquisition.
10.7.5.1.2.4.3.1 Correlation
The Correlation tab allows specifying how the raw data will be processed for correlation analysis.
4 Default button
Resets all parameters to the default values. Please note that with the default settings, the
algorithm works the fastest.
The Count rate tab allows specifying the binning time used for the count rate trace.
1 Automatic checkbox
Activates dynamic binning.
3 Default button
Resets all parameters to the default values. Please note that with the default settings, the
algorithm works the fastest.
When Automatic is active, the system averages three data points and rebins the data depending
on the measurement time. The diagram will have a mean with fluctuations above and below
the mean.
In constant binning, data points are not averaged. This results in a baseline with fluctuations
above it.
In (automatic) dynamic binning, the count rate trace is adjusted to the measurement length.
The count rate trace represents the averaged binned count rate versus measurement time, in
other words photons/second and therefore intensity.
The bin window in dynamic binning depends on the measurement time, whereby the bin
window doubles if 500 values are exceeded.
In the first step, the binning time is 3.2 ms. For the next steps, the binning time (tr) becomes tr =
3.2 ms x 2^n. The measurement time (td) at which the binning time doubles is calculated according
to td = 3.2 ms x 500 x 2^n.
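The doubling rule can be written out as a short sketch (an illustrative reading of the formulas above, not ZEN code):

```python
# Dynamic (automatic) count-rate binning: the bin window starts at
# 3.2 ms and doubles whenever the trace would otherwise exceed
# 500 displayed values.
BASE_BIN_S = 3.2e-3   # initial binning time (3.2 ms)
MAX_VALUES = 500      # values shown before the bin window doubles

def dynamic_bin_time(measurement_time_s):
    """Binning time t_r = 3.2 ms * 2^n, with n the number of
    doublings needed to keep the trace at or below 500 bins."""
    n = 0
    while measurement_time_s > BASE_BIN_S * MAX_VALUES * 2 ** n:
        n += 1
    return BASE_BIN_S * 2 ** n
```

A 1 s measurement still fits into 500 bins of 3.2 ms (1.6 s capacity), while a 10 s measurement needs three doublings, giving 25.6 ms bins.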
10.7.5.1.2.4.3.3 PCH
The PCH tab allows specifying the binning time used for the photon counting histogram.
1 Automatic checkbox
Activates dynamic binning.
When Automatic is active, 32 different binning times will be used.
3 Default button
Resets all parameters to the default values. Please note that with the default settings, the
algorithm works the fastest.
Info
In automatic binning mode, binning starts with a value of 50 ns, which is doubled 32 times, so
the binning times are 50 ns x 2^n, with n = 1 to 32. The histogram with the best dynamic range (three
standard deviations) will be selected and displayed.
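The resulting set of candidate binning times can be written out directly (illustrative sketch):

```python
# The 32 candidate PCH binning times used in automatic mode:
# 50 ns doubled repeatedly, i.e. 50 ns * 2^n for n = 1..32.
PCH_BIN_TIMES_NS = [50 * 2 ** n for n in range(1, 33)]
# The software then keeps the histogram with the best dynamic range.
```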
The Dust filter tab allows activating an electronic dust filter. Define a threshold in the count
rate intensity that, if exceeded, leads to removal of the corresponding count rate region
from the correlation analysis.
3 Default button
Deactivates Dust filter and resets the values to default.
Note that the cut-off count rate is defined as a value exceeding the average count rate during a
certain measurement period (bin window) by a certain percentage. Thus, a fast succession
of low peaks might accumulate the same count rate as one high peak within a certain period
of time; hence, the cut-off is not defined by the peak height but rather by the counts per
binning time.
If the integrated count rate over a certain count interval exceeds the average count rate by that
threshold, this interval is discarded for the correlation analysis. For example, if the system detects
a count rate in a certain time interval that exceeds the average count rate by over 30% and
the threshold was set to 30, this interval will be discarded for the correlation analysis. The time intervals
before and after the discarded region are correlated separately and the results averaged. This
also holds true if more than one region is discarded. In this case, all the single regions that are
separated by cut-out regions are correlated separately and the resulting average is displayed.
Note that the calculation of the average is performed at the beginning of the measurement. If
peak count rates occur at the beginning, this kind of dust filter does not work. Also, due to
the necessity of averaging signals over a certain integration time, more than only the peak area will
be discarded. Another consequence of the necessity to average the count rate signal is that several
small peaks following closely after each other will be treated as one huge peak and might be cut out.
This means that in the Automatic cut mode, accumulated count rates rather than peaks are removed.
For cross-correlation experiments, any region discarded in either autocorrelation function
will not be used. Cut-off regions are framed by stippled boxes and appear matted in the Count
Rate window.
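The threshold rule described above can be sketched as follows; `dust_filter_mask` is a hypothetical helper illustrating the logic, not the actual ZEN implementation (which, as noted, averages at the start of the measurement and merges closely spaced peaks):

```python
# Electronic dust filter sketch: bins whose count rate exceeds the
# trace average by more than the threshold percentage are flagged
# for exclusion from the correlation analysis.
def dust_filter_mask(binned_counts, threshold_percent):
    """Return a keep mask: False marks bins to be cut out."""
    avg = sum(binned_counts) / len(binned_counts)
    limit = avg * (1 + threshold_percent / 100.0)
    return [c <= limit for c in binned_counts]

# With a threshold of 30, any bin more than 30 % above the average
# count rate (161 here) is removed:
mask = dust_filter_mask([100, 105, 98, 400, 102], 30)
```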
Parameter Description
Correlation Create a correlation model from predefined equations, which will be
fitted analytically, see Generating a Correlation Model.
PCH Create a photon counting histogram model, which will be fitted nu-
merically, see PCH Tab [} 1263].
Formula Create a user defined model equation, which can be fitted analyti-
cally, see Formula Tab [} 1264].
This tab shows the options for assembling a correlation model. Correlation allows assembling a
model from predefined terms.
2 Terms list
Shows selected terms of the currently active model. The assembled equation is displayed
in the G(τ)=1 display area.
A new model can be defined by activating the requested equation terms and defining
the corresponding settings for each term.
The PCH tab within the Define Model tool is used to determine concentrations and the molecu-
lar brightness of molecules.
PCH can only be fitted one-dimensionally. If a cross-correlation measurement is activated, the PCH
model is automatically replaced by a correlation model.
1 Background
§ Type in a value if known.
§ Keep the value at zero if no background is expected.
§ The background is not a fit parameter.
2 Components
Select the number of components (1, 2, or 3) by clicking the Components 1, 2 or 3 buttons.
The active number of components is highlighted in blue.
3 Specific brightness
Type in a brightness value (Hz) specific for the chosen number of components beneath
the activated Components button. If you do not know the brightness, keep the value at
1.
4 Instrumental parameters
Type in values in the respective first order, second order and third order edit boxes.
These values have to be determined in independent calibration experiments using a de-
fined dye solution with a known brightness. The parameters correct for the deviation of
the confocal volume from a true Gaussian distribution. Normally, only a correction for
the first order parameter is necessary and recommended. Hence, in the calibration fit
keep the first order parameter free and the second and third order parameters fixed to zero.
Enter the determined first order number and save it with the model for later measure-
ments.
2 Keyboard
Used to type the required formula.
4 Return button
Deletes the entered formula.
5 Description panel
Displays any expected operation and syntax errors.
Note that:
§ All variables defined at the beginning are considered fixed variables, all other variables are
considered fit parameters.
§ Each defined variable must have an assigned number. This number will substitute for the vari-
able in the equation.
§ Always start a formula with G as the dependent variable, in the form G(x)=, with x being any
letter.
§ The independent variable x is considered to have the SI unit [s], and the same holds true for
any parameter dependent on it.
§ If there are no syntax errors, parameters and operations will be indicated.
The acquired correlation functions must be fitted to models in order to retrieve meaningful results.
Which model is the most appropriate depends on the biochemical process. If the underlying
process is known, the model can be selected prior to the start of the experiment. For example,
if diffusion in a membrane is the subject, a 2-D diffusion model should be applied. In other cases,
the process is not known, for example whether diffusion is free or anomalous. In this case, one can
screen different potential models and look for the best fit, taking into account the χ2 value. Often
two models work nearly equally well; for example, a two-component free diffusion model can
give as satisfactory a fit as a one-component anomalous diffusion model, and without prior
knowledge about the system it will be impossible to decide which is the better one. In principle,
models can be excluded if the fit does not work. However, a working model is only a potential
candidate; it does not have to be the correct one. Care should be taken to minimize the free
parameters as much as possible to improve the fit quality. It is not advisable to fit three components
without fixing the parameters of at least one of them. For example, if the diffusion time
of a free ligand can be determined in a pre-experiment, that value should be fixed to reduce the
number of floating parameters for the evaluation of its binding to the receptor.
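As an illustration of keeping the number of free parameters low, the sketch below evaluates the standard one-component free 3-D diffusion model (which converges to 1 for long lag times, matching the G_I convention of this software) and fits only N over a grid of candidates, keeping the diffusion time τd and the structural parameter S fixed. This is a pure-Python illustration, not the ZEN fitting engine:

```python
import math

# One-component free 3-D diffusion autocorrelation (textbook form):
#   G(tau) = 1 + (1/N) * (1 + tau/tau_d)^-1 * (1 + tau/(S^2 tau_d))^-1/2
def g_free_3d(tau, n_particles, tau_d, s):
    lateral = 1.0 / (1.0 + tau / tau_d)
    axial = 1.0 / math.sqrt(1.0 + tau / (s * s * tau_d))
    return 1.0 + (lateral * axial) / n_particles

# Crude one-parameter fit: with S and tau_d fixed, scan candidate
# values of N and keep the one minimizing the sum of squared residuals.
def fit_n(taus, g_data, tau_d, s, candidates):
    def sse(n):
        return sum((g_free_3d(t, n, tau_d, s) - g) ** 2
                   for t, g in zip(taus, g_data))
    return min(candidates, key=sse)
```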
The FCS software is designed to be flexible. This means that the user can define or assemble
equations that are useless. Care should be taken, and formulas should be compared to the ones
known from the literature, to obtain meaningful results. Also, the presence of a model does not necessarily
mean that the quality of the recorded data allows its usage. For example, anti-bunching requires
a lot of care in data acquisition, such as long measurement times and cross-correlation to reduce
detector dead times and to eliminate after-pulsing artefacts. It is the responsibility
of the user to set up his or her experiments accordingly.
The normalized autocorrelation function of the intensity I is defined as

G_I(τ) = ⟨I(t) · I(t+τ)⟩ / ⟨I(t)⟩²

or

G_δI(τ) = ⟨δI(t) · δI(t+τ)⟩ / ⟨I(t)⟩²

where ⟨ ⟩ denotes the time average and δI(t) = I(t) − ⟨I(t)⟩ describes the fluctuations around
the mean intensity.
For a long time average of I (no bleaching) the following relation exists:

G_I(τ) = 1 + G_δI(τ)

Note that the FCS software calculates G_I functions, which therefore converge to 1 and not
0. The acquired correlation functions are then compared to model equations.
where d is the offset, B the background correction, A the amplitude, and Gk,l(τ) the correlation for
a single process. The suffixes k and l signify correlation terms for dependent and independent processes,
respectively, which are multiplied with or added to each other.
The total correlation is therefore the amplitude multiplied by the product of the single correlation
terms that are dependent and hence convolute each other. This amplitude has to be corrected for
background and any offset. In cases where the processes are independent of each other, the
single correlation terms add up, for example when there is more than one component all bearing
the same label, or for bunching terms that are independent of each other. If independent
and dependent processes are present, all independent terms add up and are multiplied
with the dependent terms.
One can distinguish between different classes of fluctuation processes: anti-bunching, bunching
and diffusion.
10.7.5.1.2.5.4.1 Amplitudes
The amplitude of the correlation function is influenced by the offset, the background, and the number
of particles in dependence on the geometric factor. The amplitude is also influenced by the
process of correlation.
The "1"
In a normal correlation the curve converges to 1 if intensities I are correlated, as is the case with the FCS software. Note that in other cases, if fluctuations δI are correlated, the correlation function converges to 0, provided no bleaching occurs.
You can therefore easily convert GI(τ) to GδI(τ) values by adding a fixed offset of −1.
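As an illustration (not part of the ZEN software), the two conventions can be compared numerically on a synthetic intensity trace; the trace and the lag value below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic intensity trace (counts per sampling interval), standing in
# for a measured fluorescence signal.
I = rng.poisson(100, size=20000).astype(float)

def g_intensity(I, lag):
    """G_I(tau): correlate intensities; converges to 1 at long lag times."""
    return np.mean(I[:-lag] * I[lag:]) / np.mean(I) ** 2

def g_fluct(I, lag):
    """G_dI(tau): correlate fluctuations dI = I - <I>; converges to 0."""
    dI = I - I.mean()
    return np.mean(dI[:-lag] * dI[lag:]) / np.mean(I) ** 2

lag = 5
# Up to finite-sample effects, the two definitions differ by exactly 1,
# which is why adding an offset of -1 converts G_I into G_dI.
print(g_intensity(I, lag), g_fluct(I, lag) + 1)
```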
Offset d
Background B
Amplitude A
A = γ / N
where γ is the geometric factor accounting for the point spread function (PSF) and N the mean number of particles.
In the FCS software γ can be a fit value or a predefined fixed value. In case γ is a fit value, N must be fixed in the fit procedure. Normally, N is a fit parameter.
Please note that γ takes different values for different fitting models depending on the assumed in-
tensity distribution of the point spread function (PSF):
γC=1.000 (cylindrical)
γ2DG=0.500 (2-D Gaussian)
γ3DG=0.350 (3-D Gaussian)
γGL=0.076 (Gaussian-Lorentzian).
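The dependence of the amplitude on the geometric factor and the particle number can be sketched as follows (a minimal illustration assuming the relation A = γ/N given above; the particle number is arbitrary):

```python
# Geometric factors for the assumed PSF intensity distributions,
# as listed above.
GAMMA = {
    "cylindrical": 1.000,
    "2D-Gaussian": 0.500,
    "3D-Gaussian": 0.350,
    "Gaussian-Lorentzian": 0.076,
}

def amplitude(gamma: float, n_particles: float) -> float:
    """Correlation amplitude A = gamma / N for N particles in the focus."""
    return gamma / n_particles

# With a 3-D Gaussian PSF and on average 2 particles in the focus:
A = amplitude(GAMMA["3D-Gaussian"], 2.0)
print(A)  # 0.175
```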
γ can also be calibrated if a known concentration c of a dye is measured. In this case N can be fixed and γ fitted. The obtained number can be entered as the calibrated fixed value. N can be calculated from the equation
N = c · V · LA
with V being the confocal volume and LA = 6.022 × 10²³ mol⁻¹ the Avogadro number.
The volume V is calculated from the equation
V = π^(3/2) · ωr² · ωz
with ωz the axial and ωr the lateral focus radius. The radii themselves have to be determined by a calibration measurement using a dye with a high quantum yield and a known diffusion coefficient D, from the fitted diffusion time τd and the structural parameter S, employing a free diffusion model with triplet state.
The following relations exist:
τd = ωr² / (4 D) (5e, one-photon excitation)
τd = ωr² / (8 D) (5f, two-photon excitation)
ωz = S · ωr (5g)
Equations 5e or 5f, depending on the excitation source, can be used to retrieve ωr; with its knowledge ωz can be calculated from equation 5g.
"N" can have different meanings in different fit models. For biology, normally the number of diffusing particles is of interest. In this case, if photo-physical processes (triplet, blinking, stretched exponentials) are involved, it is recommended to use their normalized forms, since then the number of molecules corresponds directly to the number of diffusing particles. If photo-physical terms are not normalized, the number measured is the total number of diffusing particles plus those undergoing photo-physical processes.
Anti-bunching is the phenomenon that a molecule cannot emit photons as long as it stays in the excited state. Hence, during the transition time required to drop back to the ground state, which in most cases corresponds to the lifetime if no other photo-physical processes are involved, no photon can be expected. This results in anti-correlation and hence a drop of the correlation function below 1.
where C is the amplitude and τa the transition time, also referred to as the lifetime.
where C is either a fit parameter or a fixed value and often takes the value 9/5.
There are two cases to be distinguished: First, if the anti-bunching is independent of other processes, then equations 6a and 6b in the non-normalized or normalized form must be used and the terms are multiplied with other correlation terms. If the anti-bunching is treated as dependent on other processes, then equation 6c is the correct one to use and the term is added to other correlation terms.
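The shape of an anti-bunching term can be sketched as follows; the exponential form and the parameter values used here are assumptions (the manual's equations 6a–6c are not reproduced above), with C = 9/5 taken from the text:

```python
import numpy as np

def antibunching(tau, C=9 / 5, tau_a=3e-9):
    """Multiplicative anti-bunching term in a common textbook form
    (assumed): G_a(tau) = 1 - C * exp(-tau / tau_a).
    The dip below 1 at short lag times reflects that a molecule in the
    excited state cannot emit a second photon until it has relaxed."""
    return 1.0 - C * np.exp(-tau / tau_a)

tau = np.logspace(-10, -6, 100)  # lag times, s
g = antibunching(tau)
# The curve rises from below 1 toward 1 once tau exceeds the
# transition time tau_a (the lifetime).
```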
not normalized
normalized
where K1 is the fraction of molecules, τk1 the exponential decay time, k1 the frequency factor and κ1 the stretch factor.
K1 and τk1 are fit parameters; k1 is a fixed parameter and must be user defined; κ1 is either a fit pa-
rameter or can be fixed.
Note, fixing k1 and κ1 to "1" results in a simple anti-bunching term.
not normalized
normalized
where K1 and K2 are the fractions of molecules, τk1 and τk2 the exponential decay times, k1 and k2 the frequency factors and κ1 and κ2 the stretch factors.
K1, K2, τk1 and τk2 are fit parameters; k1 and k2 are fixed parameters and must be user defined; κ1 and κ2 are fit parameters or can be fixed.
Bunching is the phenomenon of a burst of photons during a certain time interval, the duration of
which is determined by photo-physical processes including triplet, blinking, flickering and proto-
nation. These terms are exponential decay functions. Formally, they look the same, only the expo-
nential decay might be different.
Triplet
not normalized
normalized
where Tt is the triplet fraction, that is the number of molecules undergoing triplet states and τt the
triplet decay time.
Tt and τt are fitted parameters.
Triplet is based on a quantum-mechanically forbidden intersystem crossing from the excited state to the so-called triplet state. This state lasts for 1 – 5 µs. When the electron drops back to the ground state, no photon is emitted; hence during the triplet state the molecule is in a dark state. Triplet shows up as a rise in the correlation amplitude, visible as a deviation from the flattening curve at shorter correlation times. If not normalized, the triplet fraction contributes to the total number of molecules.
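The distinction between the normalized and non-normalized triplet term can be sketched as follows; the functional forms are the commonly used ones and are an assumption here, since the manual's equation images are not reproduced above:

```python
import numpy as np

def triplet(tau, T_t, tau_t, normalized=True):
    """Triplet bunching term in its common form (assumed):
      non-normalized: 1 - T_t + T_t * exp(-tau / tau_t)
      normalized:     1 + T_t / (1 - T_t) * exp(-tau / tau_t)
    With the normalized form the fitted N corresponds directly to the
    number of diffusing particles."""
    decay = np.exp(-tau / tau_t)
    if normalized:
        return 1.0 + T_t / (1.0 - T_t) * decay
    return 1.0 - T_t + T_t * decay

tau = np.logspace(-8, -3, 200)
g_norm = triplet(tau, T_t=0.2, tau_t=2e-6, normalized=True)
# The normalized form converges to 1 at long lag times, the
# non-normalized one to 1 - T_t.
```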
Blinking
not normalized
normalized
where Tb is the blinking fraction, that is the number of molecules in the dimmer state and τb the
blinking decay time of the dimmer state. Note, if the blinking term is not normalized, the number
of blinking molecules will influence the total number of molecules.
Tb and τb are fitted parameters.
Blinking is based on the phenomenon that the electron distribution over conjugated systems can change in dependence on the local environment, for example with changes in pH, which leads to molecules in a bright and a dim or dark state. It is therefore a kinetic process that can be described with the following relations:
Note that blinking refers to a process that does not lead to a covalent modification of the chemical bonds. If covalent changes occur, the process is referred to as flickering, which is formally treated in the same way.
not normalized
normalized
where T1 and T2 are the fractions of molecules in the triplet state, and τt1 and τt2 the triplet exponential decay times.
T1, T2, τt1 and τt2 are all fitted parameters.
not normalized
normalized
where T1 and T2 are the fractions of molecules in the triplet state, and τt1 and τt2 the triplet exponential decay times.
T1, T2, τt1 and τt2 are all fitted parameters.
not normalized
normalized
where K1 is the fraction of molecules, τk1 the exponential decay time, k1 the frequency factor and κ1 the stretch factor.
K1 and τk1 are fit parameters; k1 is a fixed parameter and must be user defined; κ1 is either a fit parameter or can be fixed.
Note, fixing k1 and κ1 to "1" results in a simple bunching term.
not normalized
normalized
where K1 and K2 are the fractions of molecules, and τk1 and τk2 the exponential decay times, k1
and k2 the frequency factors and κ1 and κ2 the stretch factors.
K1, K2, τk1 and τk2 are fit parameters, k1 and k2 are fixed parameters, κ1 and κ2 are fit parameters or
can be fixed.
This term is often required to fit protonation, with the second stretch factor and the frequency factors set to "1".
Diffusion is driven by Brownian motion. We can distinguish translational, rotational and flow dif-
fusion.
Rotational diffusion
In the most general form, rotation can be described as the sum of 5 exponential terms
where Ra is the amplitude, cm the relative amplitude, rm the frequency factor and τr,m the rotational
diffusion time.
However, there are special cases that are of more use. In symmetric rotation, the general formula
reduces to:
not normalized
normalized
where Ra is the rotational amplitude and τr the rotational diffusion time.
Ra and τr are fit parameters.
If rotation is treated as dependent on other processes, the formula used as an additive term is defined as:
Ra is either a fit parameter or a fixed value and often takes the value 4/5.
In case of asymmetric rotation, the term is as follows:
not normalized
normalized
where Ra is the amplitude, c1 and c2 are relative amplitudes, r1 and r2 frequency factors, and τr,1 and τr,2 rotational diffusion times.
Ra, τr,1 and τr,2 are fitted parameters; c1, c2, r1 and r2 are fixed values and must be user defined.
Rotational frequencies often take the following values: r1 = 1 and r2 = 10/3.
The relative amplitudes depend on the polarization of the excitation light and the analyzer in the emission beam path. Depending on the configuration, c2 takes the values 64/9, 4, 4, 16/9, 1 and 1.
Translational diffusion
In its general form, translational diffusion is defined as:
with
the constraint
where τd,i is the diffusional correlation time of molecule species i, S the structural parameter, that is the ratio of axial to lateral focus radii, αi the anomaly parameter or temporal component of molecule species i, and ed1 and ed2 fixed values that have to be user defined. The following values define 1-, 2- and 3-D diffusion:
ed1 ed2 Dimensionality
0 1 1-D
1 0 2-D
1 1 3-D
Note that in the FCS software these values are automatically selected with the choice of dimensionality.
S is either a fit parameter or a fixed value. It is an instrumental parameter and can be determined as a fit result in a calibration experiment using a dye solution with a known diffusion coefficient.
αi is either a fitted value for anomalous diffusion or a fixed value (set to "1") for free diffusion. The following relation exists:
α Diffusion process
= 1 Free diffusion
≠ 1 Anomalous diffusion
Note that αi is set automatically to "1" if free diffusion is selected. If anomalous diffusion is selected, the parameter will float.
τd,i are fitted parameters. They can be converted to diffusion coefficients Di using formulas 5e or
5f. The FCS software allows you to directly fit to Di values, but in this case the lateral radius ωr has
to be specified as a fixed value.
Please note that in the case of anomalous diffusion the following relations exist:
If fitting to the diffusion coefficient is activated, the Γ values have to be calculated from the D values by the following conversion:
The Φi values are fit parameters. They account for different brightness of different components. In
principle, if molecules of different brightness are present, the apparent molecular brightness is de-
fined as
where ηi is the brightness of the molecules in kHz or a dimensionless relative brightness value.
The brightness of the species has to be determined beforehand in control experiments.
Note that the brightness contributes quadratically to the correlation function; in other words, a molecule that is twice as bright will contribute 4-fold more. Therefore, the fitted number of molecules must be corrected to obtain the real number Ndiff of diffusing particles; please note that to obtain the diffusing particle number directly, the other terms should be used in their normalized form:
If one wants to know the true fraction fi of each species, those values can be retrieved with the known brightness from the relation
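The recovery of true fractions from fitted amplitudes can be sketched as follows; the exact relation uses the quadratic brightness weighting described above, and the proportionality fi ∝ Φi/ηi² used here is an assumption, as the manual's formula image is not reproduced:

```python
def true_fractions(phi, eta):
    """Recover true molar fractions f_i from fitted fractional
    amplitudes phi_i, assuming each species enters the correlation with
    the square of its brightness eta_i (assumed relation):
      f_i  proportional to  phi_i / eta_i**2
    The result is normalized so the fractions sum to 1."""
    raw = [p / e ** 2 for p, e in zip(phi, eta)]
    total = sum(raw)
    return [r / total for r in raw]

# Two species, the second twice as bright: equal true fractions show up
# with a 4-fold larger fitted amplitude for the bright species.
phi = [0.2, 0.8]   # fitted fractional amplitudes
eta = [1.0, 2.0]   # relative brightnesses
print(true_fractions(phi, eta))  # [0.5, 0.5]
```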
Flow
Flow signifies active transport either via cytoplasmic movement or directed transport.
If flow occurs in the absence of translational diffusion, the term is defined as
Gf(τ) = exp(−(τ/τf)²)
and in combination with translational diffusion as
Gf(τ) = exp(−(τ/τf)² / (1 + τ/τd))
with τf representing the average residence time for flow and τd the diffusion correlation time.
Note, in the FCS software, the correct term is automatically loaded in dependence on the absence
or presence of a translational term.
With the knowledge of the lateral radius ωr, given as a fixed value, the software allows you to fit directly to the velocity v instead of the average residence time. The following relation exists:
v = ωr / τf
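Both flow variants and the velocity conversion can be sketched as follows; the Gaussian flow forms are the commonly used ones and are assumptions here, and the numerical parameters are illustrative:

```python
import numpy as np

def flow_term(tau, tau_f, tau_d=None):
    """Flow correlation term in a commonly used form (assumed):
      pure flow:        exp(-(tau/tau_f)**2)
      flow + diffusion: exp(-(tau/tau_f)**2 / (1 + tau/tau_d))
    The variant with tau_d applies when a translational term is active."""
    x = (tau / tau_f) ** 2
    if tau_d is None:
        return np.exp(-x)
    return np.exp(-x / (1.0 + tau / tau_d))

def flow_velocity(omega_r, tau_f):
    """v = omega_r / tau_f: velocity from the average residence time."""
    return omega_r / tau_f

# Illustrative values (assumed): 0.2 um lateral radius, 2 ms residence time.
v = flow_velocity(0.2e-6, 2e-3)   # m/s
```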
Terms
Start the definition of a model by activating the checkbox of the term you want to include in the final correlation equation. Deactivate the checkbox to remove the term from the final equation. All activated terms are assembled into the final equation that is displayed in the G(τ) = display area.
Choose from:
Term Description
Offset d Deviation from 1
Triplet G t(τ) Exponential correlation terms for triplet state, blinking, flickering or
other bunching correlation terms
Settings
For a selected term the Settings button becomes available. The Settings panel allows you to choose specific equations and assign values to parameters, if applicable. The Settings Description box provides information on the parameter or displays the formula of the equation. In addition, it provides useful conversions of the fitted parameter to other parameters of interest. For more information on the formulas used see Model Equations [} 1265].
The following settings are available:
Term Settings
Offset d § Set the offset to 0 by clicking the Normalized button.
§ Define the offset by clicking the Calibrated button. Type in a value into the edit box or set a value using the scroll arrows.
§ The offset can have positive or negative numbers and the value
specified will be added to all correlation values.
§ The description panel provides information on the offset.
Amplitude A § Set a value for the geometric factor γ, which describes the point-
spread function.
§ By clicking the cylindrical, 2DG, 3DG and GL buttons a cylindrical,
2 dimensional Gaussian, 3 dimensional Gaussian (3DG) and Gauss-
ian-Lorentzian (GL) PSF can be set, respectively.
§ Click the Calibrated button to type a user-defined number into the edit box or use the scroll arrows to set a value.
§ The description panel provides information on the amplitude.
§ At least one of the γ factor or the number of molecules N has to be fixed in the fit procedure.
Triplet G t(τ) The triplet represents bunching terms that are exponential decay
functions.
§ Choose between normalized and non-normalized triplet term by
activating/deactivating Normalized.
§ Choose whether the bunching terms should be weighted or not by activating or deactivating Weighted. If no weights are applied, the fit follows the measured curve to minimize χ². If no boundary values are set for the relaxation times, noise might also be followed and triplet fractions might turn out too high. If weights are applied, noise is followed less. The following weight equation is used:
Several options are available for the bunching terms. Select the rele-
vant term from the Components drop down menu:
§ Triplet: 1 exponential function
§ Blinking: 1 exponential function
§ Independent Triplet and Blinking: Sum of 2 exponential functions
§ Dependent Triplet and Blinking: Product of 2 exponential functions
Translation Gd(τ) The following possibilities for parameter settings are available:
§ Fit to fractions (normally used when no brightness differences are observed between the components) or to fractional intensities by activating Fractional Intensities. Type the absolute or relative brightness values of the components into the Molecular brightness edit box.
§ Fit to the diffusion time, or directly fit to the diffusion coefficient by activating Diffusion coefficients. Type the radial dimension of the confocal volume into the ωr edit box or use the scroll arrows.
§ In case two-photon excitation is used, activate 2 Photon, since this will influence the fit formula if the Diffusion coefficients option was chosen.
§ Select the number of components (1, 2 and 3) by clicking the Com-
ponents 1, 2 or 3 buttons. The number of active components will
be highlighted in blue.
§ Select free/anomalous diffusion in the dropdown menus beneath each component button.
§ Set the diffusion Dimension in the dropdown menus beneath each
component button. Select between 1-D, 2-D and 3-D.
§ Enter brightness values in the Brightness edit boxes for each component. These values are only displayed if the Fractional Intensities option is selected. Type in absolute values (which must have the same units for all components) or relative values.
§ The description panel provides information on the translational dif-
fusion terms.
Flow Gf(τ) § Determine, whether you want to fit to the diffusion time or directly
to the velocity by activating Velocity. Type in the radial dimension
of the confocal volume into the ωr edit box or use the scroll ar-
rows.
§ The description panel provides information on the flow terms.
§ Note, that the system automatically toggles between the pure flow
and flow in combination with translational diffusion in cases where
the Translation term is deactivated or activated, respectively.
Stretched exponential Ge(τ) § Choose between bunching and anti-bunching terms by clicking the Bunching or Antibunching buttons, respectively.
§ Choose between 1 (mono exponential) or 2 (double exponential)
stretched exponential terms by clicking the Components 1 or 2
buttons. The active option will be displayed in blue.
§ Choose between normalized and non-normalized stretched expo-
nential term by activating/deactivating Normalized.
§ Choose between dependent and independent stretched exponentials by choosing from the Components dropdown menu. This only becomes available if two components are active.
§ The description panel provides information on the stretched exponential terms.
§ Note, the frequencies and stretch factors are not fit parameters and must be defined in the Frequencies k1 and k2 as well as the Stretch factors κ1 and κ2 display boxes. Either enter a value or click the 1 button for the default setting.
§ For two components, only the formula for dependent processes is
presently available.
10.7.5.2.2 FCS Measurement Setup for Labeled Molecules within Cultured Cells
10.7.5.2.3 FCS Measurement Setup for Labeled Molecules in Solution Using a Sample Carrier
Prerequisite ü Dye/molecule dissolved in a suitable solvent and distributed at a suitable concentration into the wells of a sample carrier.
ü High NA Objective selected for imaging.
ü ZEN Module Tiles & Positions is available on the system.
ü Focus point set into the solvent (about 100 to 200 microns above the coverslip), possibly using imaging techniques to check the emission signal.
ü FCS track is defined with suitable channel selection, laser line and power, and measurement
parameters for the actual FCS measurement.
ü Pinhole is adjusted for optimal FCS results.
1. Activate an LSM confocal track (or other imaging track).
2. Open Tiles tool.
3. Select the sample carrier matching the one used for the experiment.
4. Move the actual sample carrier to the reference point marked as a yellow cross in the sample carrier graphic and confirm with OK to close the panel.
5. Calibrate the sample carrier with the following steps:
- 1/4 -> no changes, click Next.
- 2/4 -> Click Set zero, click Next.
- 3/4 -> Choose Search Reference Point (1 Point) from the drop down list, click Next.
- 4/4 -> Set Current XY, click Finish.
6. Activate the FCS track.
7. Open Tiles Viewer and zoom out to see the carrier in total.
8. Select the following tool from the Positions icons on the left: (Setup new positions from an underlying sample carrier).
9. Mark the wells by clicking them individually (Ctrl + click) or select them by drawing a contour around the relevant ones, holding the left mouse button while moving the cursor.
10. Keep the default settings: Distribute Positions by Number, Number = 1, and Bias =
None.
11. Click Plus to add one position per marked well.
12. Click Start Experiment to acquire an FCS measurement per selected well.
Info
4 Only one position per carrier well is supported.
4 Repeated measurements per well can be achieved combining this type of acquisition with
a Time series multidimensional acquisition.
4 Multiple Positions per well or tiles or combinations of tiles and positions are not supported
and will be deleted when the FCS track is activated.
4 If the sequence of the positions is changed after their initial definition, the temporal sequence of the position measurement data can only be deduced from the time stamp in the result table within the FCS document. IDs of positions and measurement data match.
1. Use the initially set laser power indicated in the Laser control panel in Imaging Setup or
Channels Tool.
2. Start a Continuous scan with an active FCS track and bring the Detector Counting tool from the right tool area into view. A good starting point is a laser power which leads to a count rate between 50 kHz and 200 kHz.
3. Change the laser power using the slider next to the active laser used for acquisition to in-
crease or decrease the values to achieve such a count rate.
4. When finished, stop Continuous scan.
Info
For most dyes, the Counts/Molecule setting should be optimized in a second step to a value
just under its maximum by adapting the laser power. If carriers of different slide thickness are
employed, the Counts/Molecule setting should be optimized by using the correction ring of
the objective. The correction ring is turned counterclockwise or clockwise until a maximum
value is obtained. The correction ring should also be used for adjusting the Counts/Molecule setting whenever the immersion medium is changed. This is especially important in cases where the refractive index of the immersion medium differs from that of the sample.
The pinhole is adjusted using a dye solution. For each excitation wavelength and MBS combina-
tion a suitable dye must be used.
We recommend:
§ Rhodamine 6 Green (Rh6G) or Alexa 488 for excitation lines 445, 488 and 514 nm
§ Tetra-Methyl-Rhodamine (TMR) or Alexa 546 for excitation line 543 nm or Alexa 568 for exci-
tation line 561 nm
§ Cy 5 or Alexa 633 for excitation line 639 nm
We recommend working with a relatively concentrated solution (10⁻⁶ mol/l) and low laser power to achieve intensity curves with low noise.
Prerequisite ü All steps from FCS Measurement Setup for Labeled Molecules in Solution [} 1279] up to
the step Pinhole Adjust have been performed.
1. Set the laser power to achieve above 100 kHz count rate, if possible.
2. Click Adjust in the pinhole panel of the Channels tool.
3. Click the Coarse button of X first and then Y to perform a coarse adjustment scan in x and
then in y.
à The pinhole will travel over the maximum range for each axis.
à The adjustment scan in y will use the optimum setting found for the x axis.
4. Perform a fine adjustment in x and y by subsequently clicking the Fine buttons.
à For the fine adjustment the pinhole will travel only a limited distance (about 10 microns)
around the peak determined in the coarse adjustment or around the current stored posi-
tion of the pinhole if the coarse adjustment was skipped.
5. If the pinholes are adjusted for the first time, the coarse adjustment must be performed first
followed by the fine adjustment, each time for x and y. For subsequent readjustments, the
fine adjustment is typically sufficient.
6. In case more lasers and detectors are selected, the adjustment scan will be done for all
lasers/detectors simultaneously.
7. When using the 405 nm laser it might be necessary to adjust the collimator for optimal
overlay in Z. This should be done in advance to the pinhole adjust procedure if necessary
(see Collimators Adjustment [} 1290]). The collimating lens is pre-set at delivery and is a
function of the wavelength and objective used. In case a new objective is used with the sys-
tem this value needs to be determined anew.
8. The values determined by the adjustment scan for the X and Y position of the pinhole(s) are
indicated by the position of the line in the graph as well as the slider and the number in the
edit box. In case this value is not accepted it can be changed by moving the slider or typing
in a different value.
9. Click Finish to store the indicated X and Y values as general pinhole settings for the main dichroic used. These values will now be used for the current and future FCS/FCCS measurements.
It is recommended to adjust the pinhole for FCS measurements on a regular basis.
10. The adjustment scan can be performed separately for X and Y. The values can be stored in-
dependently as well. However it is recommended to adjust both values and store them
both.
The Z-Scan is especially suitable for positioning the confocal volume on a cell membrane. Due to the shape of the confocal volume, the membrane should not be approached from the side but rather from the top of the cell. Please note that it is better to use the upper membrane for measurement, since the lower membrane might be too close to the glass bottom surface, resulting in disturbing reflections. Besides using the Z-Scan, you can also position the membrane manually, albeit with less precision.
5. If no clear signal can be detected, or the peak of interest lies too close to the range ex-
tremes, take over the Z position most likely to represent the wanted one by moving the bar
to this position and clicking Finish.
6. Restart the Z-Scan at this Z position with an adjusted range and step width and take over
the then defined Z position for the FCS measurement.
Whenever Save FCS raw data file is activated in the Acquisition Selection/LSM of the Options
tool, the raw data file is saved along with the *.FCS file in the folder selected for saving the *.FCS
file.
*.RAW files carry the same name as the *.fcs file, with name extensions that identify the repetition and the channels.
The measurements can be saved individually using the standard save tools or by activating Auto
Save in the left tool area. The *.FCS file includes the whole data set (curves, fitting results, fit pa-
rameters).
Whenever a *.FCS file is opened the data show up in the Correlation view tab of the FCS docu-
ment (see Correlation Tab [} 1246]).
The Reload function (see Reload Tool [} 1258]) is only available if raw data have been saved
along with the *.FCS file.
*.RAW files are saved optionally when activating the storage of raw data in the options menu (see
Saving and Loading FCS Data [} 1284]). The *.RAW data files can be opened directly in ZEN. They
are also opened automatically if linked to a *.FCS file. In this case all raw data files associated with
the *.FCS file will be opened in the same FCS document.
Info
Raw data formats can also be opened in ZEN Black (all versions).
Info
The sampling rate is set to 15 MHz. Depending on the detector used for FCS measurements, a
maximum data rate of up to 16 Mbyte/sec is achieved.
Bytes Explanation
0-63 § represent the file identifier with the channel number
Linearly (plane) polarized light, that is light whose electric field oscillates in only one plane, excites fluorescent molecules with a preferred dipole orientation and results in polarized emitted light. This provides a contrast-enhancing method that is especially useful in the study of molecules that are fixed in their orientation or are greatly restricted in their rotational diffusion. Anisotropy is directly related to polarization and is defined as the ratio of the polarized light component intensity to the total light intensity.
In polarization microscopy using LSM systems the sample is irradiated with vertically polarized light (with respect to the optical table) from a laser source. The emitted fluorescence is passed sequentially through emission polarizers (analyzers) positioned before the Quasar detector. They transmit either the vertically (IVV) or horizontally (IVH) polarized emitted light onto the Quasar detector (L-format fluorescence polarization). Since the vertical component of the emission light is polarized parallel to the vertically polarized excitation light, it is often also referred to as the parallel component (P Pol, Ip, I∥). Likewise, the horizontally polarized emission light is also designated as the perpendicular ("senkrecht" in German) component (S Pol, Is, I⊥). In the software the "p" and "s" designations are used.
Polarization P and anisotropy r are defined as:
P = (Ip − Is) / (Ip + Is) and r = (Ip − Is) / (Ip + 2 Is), with 0 ≤ r ≤ 1.
They can be interconverted in the following way:
P = 3r / (2 + r) and r = 2P / (3 − P)
In a completely polarized sample (IS = 0) the anisotropy r = 1. In a completely non-polarized sam-
ple (Is = Ip) anisotropy r = 0.
The formulas for polarization P and anisotropy r as given above are strictly true only if the optical transmission of both emission polarizers is identical. Any differences must be corrected by introducing a correction factor G that is multiplied with Ip. Hence the anisotropy r in such a case is calculated according to:
r = (G · Ip − Is) / (G · Ip + 2 Is)
However, since in LSM systems the polarization of the excitation light cannot be changed from vertical to horizontal, G has to be determined with an isotropic fluorescent dye solution as the ratio of the mean intensities of Ip and Is, e.g. obtained from the histogram view.
Info
Please note that the G factor is not the mean intensity of the ratio calculation (R), where every
pixel is computed separately. It has to be calculated from the ratio of the mean intensities of
the Ip and Is images.
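The G-factor determination and the corrected anisotropy calculation can be sketched as follows; the images are synthetic, and the channel order in the G ratio is an assumption chosen so that the corrected anisotropy of an isotropic sample comes out as zero:

```python
import numpy as np

def anisotropy(I_p, I_s, G=1.0):
    """Pixel-wise anisotropy with G-factor correction, following the
    relation above: r = (G*Ip - Is) / (G*Ip + 2*Is)."""
    I_p = np.asarray(I_p, dtype=float)
    I_s = np.asarray(I_s, dtype=float)
    return (G * I_p - I_s) / (G * I_p + 2.0 * I_s)

def g_factor(I_p_img, I_s_img):
    """G from an isotropic dye solution: ratio of the MEAN intensities
    of the two channel images (not the mean of the pixel-wise ratio).
    The channel order is an assumption; for an isotropic sample the
    corrected anisotropy must come out as 0."""
    return np.mean(I_s_img) / np.mean(I_p_img)

# Isotropic control: both channels equal up to a transmission difference.
rng = np.random.default_rng(1)
base = rng.uniform(100, 200, size=(64, 64))
I_p, I_s = base * 0.9, base * 1.0    # p-channel transmits 10% less
G = g_factor(I_p, I_s)
r = anisotropy(I_p, I_s, G=G)
# After correction the mean anisotropy of the isotropic sample is ~0.
```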
Info
If you want to calculate Anisotropy using the ratio imaging formulas provided by ZEN, then
choose P for Track 1 and S for Track 2.
ü Pol Anisotropy P Filter selected for Track 1 and Pol Anisotropy S Filter selected for Track 2.
1. Acquire a time series multitrack image with the image in Track 1 showing the emission sig-
nal filtered by the P Anisotropy filter and the image in Track 2 showing the emission signal
filtered by the S Anisotropy filter.
2. Open MeanRoi View .
3. Open Ratio action tab.
à The G factor is shown as mean intensity value of the Ratio_R1 channel in the intensity
chart.
The G-factor is therefore defined as:
4. In MeanROI View choose the Ratio Type 3 formula in the Ratio tab.
5. Use source 1 (S1) as the P track image (Ch1 T1) and source 2 (S2) as the S track image (Ch1 T2), with consideration of the G factor in Parameters.
Use the System Maintenance and Calibration dialog within the Tools menu to check on es-
sential parameters of the LSM.
§ The scanfield test provides information on the quality of the scanner calibration. If this test repeatedly fails although the scanners have been calibrated, it might be necessary to exchange the scanners. Contact your local ZEISS service engineer.
§ The sharpness test provides information on the resolution contrast. This parameter depends on the overall alignment of the system and the quality of the detectors. If detectors deteriorate, a failure of this test might be an indication that a detector needs to be exchanged.
The position of the collimator(s) for lasers coupled to the invis port can be adjusted if needed. The
position is dependent on the wavelength and the objective used. The collimator position is a func-
tion of the system and the position is not part of an experiment and hence cannot be reloaded
with an experiment or re-used with an image.
For tunable Multiphoton lasers it might be necessary to adjust the collimator for a given wavelength and objective when the laser is first set up together with the LSM at the customer's site.
Use the following parameters to adjust the collimator.
Parameter Description
Collimator Choose the collimator you want to adjust. In case no Multiphoton laser lines are available, only the collimator for the 405 nm laser is accessible.
Position Use the slider or the edit box to change the position of the collimator. Check the image of a continuously scanned sample (e.g. the LSM calibration objective) for highest intensity. In case you want to align the collimator for optimal overlap with vis lasers, perform for example a reflection line scan with a vis laser and focus for highest signal. Then adjust the collimator of the invis laser for highest image intensity. If both reflection images show highest intensity, the excitation levels match.
Store Current Pos Stores the current position as the default position for the selected col-
limator.
Move to Stored Moves the position of the selected collimator to a stored value for the
Pos given wavelength and objective. Stored positions are automatically set
when an objective is changed or the laser is tuned (Multiphoton
laser). The current non stored settings are kept for an experiment if
the laser is not tuned or the objective is not changed.
The position of the pinhole (or rather of the laser beam in X-Y coordinates) in relation to the detector makes a major contribution to image optimization.
The pinholes have already been adjusted at the factory. These settings are applied when a standard configuration is loaded.
If you want to create a setting that differs from the standard configurations, use the following parameters.
Parameter Description
Diameter [µm] slider: Changes the diameter of the pinhole.
X Position [Vis]: Changes the X position of the excitation beam of all vis lasers in relation to the pinhole.
Y Position [Vis]: Changes the Y position of the excitation beam of all vis lasers in relation to the pinhole.
X Position [Invis]: Changes the X position of the excitation beam of all invis lasers in relation to the pinhole.
Y Position [Invis]: Changes the Y position of the excitation beam of all invis lasers in relation to the pinhole.
Store Current Pos: Stores the current position as the default position for the given lasers and main dichroic filter.
Move to Stored Pos: Resets the pinhole position to the stored value.
Multiphoton lasers are steered into the scan head of the LSM via a free beam coupling. The components of the coupling kit ensure that the laser beam is enclosed within metal pipes. The coupling components include a periscope which lifts the free beam up to the height it needs to hit the coupling port of the scan head. The position of a freely coupled laser beam is subject to temperature changes, mechanical influences and the laser itself. When the laser is tuned, its beam position changes very slightly. When working with two laser beams, the precise overlay of the two beams can be critical for the experiment. Hence the “Adjust Periscopes” tool provides access to the motorized steering mirrors within the periscope to correct possible mismatches between the two laser beams.
The periscope contains two sets of mirrors for each beam. One set has more influence on the homogeneity of the illumination; the other set is adjusted rather for a precise overlay between the two beams. For good coupling, both the lateral position and the angle of the beam need to be adjusted so that the laser travels along the center of the optical axis.
The best reference to achieve this is an image of a structured and fluorescent sample taken with the 488 nm laser. Such a sample can also be purchased from ZEISS.
The following controls are available:
Parameter Description
Reference Image: A drop-down list opens listing all open images, with the last one acquired at the top. The selected image appears as overlay image in the continuous image tab.
Opacity: Relates to the reference image display in the continuous image tab. 100% shows only the reference image, 0% shows only the continuously acquired image.
On: Switches the overlay image on/off (set opacity/0% opacity) for a quick cross-check of overlaid structures.
NLO 690-1300 nm: Click to select the periscope mirrors for the tunable line of the Multiphoton laser.
NLO 1045 nm: Click to select the periscope mirrors for the fixed line of the Multiphoton laser.
Arrow buttons for various directions: Depending on the selected laser line, the arrows move the beam, and hence the continuously acquired image taken with the NLO laser, in the indicated direction. The arrows allow fine and coarse movement steps. The number of steps is displayed next to the arrow buttons.
Set position zero: The current absolute position of each mirror pair can be defined as position zero. This resets the number of moved steps to zero.
Move to zero: Moves the mirrors back to the zero position. The position of the mirrors at each start of the system is taken as the initial zero.
Prerequisite ü A sample with homogeneous fluorescence is placed in the sample holder (the sample's fluorescence must be excitable by an NLO line).
ü The preferred objective for the alignment: for Axio Observer 10x; for Axio Examiner 20x 1.0.
ü The sample is in the field of view and focused.
ü The tunable Multiphoton laser line is tuned to the desired wavelength.
ü The acquisition parameters are set to the maximum field of view and an imaging speed of 9 or 10 to quickly see the changes in the image when clicking the alignment arrows.
ü Imaging Setup is configured using one internal detection channel at the relevant spectral range with open pinhole.
ü The Multiphoton laser is activated for the single-channel acquisition track.
1. Open Tools > System Maintenance and Calibration.
2. Select the tool Adjust Periscopes.
3. Select the laser line whose periscope mirrors should be controlled to check/align for homogeneous illumination.
4. Start a Continuous scan.
5. Set laser power and detector gain for optimal imaging with just the first pixels in saturation
(check with range indicator LUT or the rainbow LUT).
6. If the brightest area is not centered, use the arrow keys to center it.
7. You can also use a profile line for better visualization of the intensity distribution.
à The motor steps of each mirror are counted to easily move back to the original value.
8. Stop continuous scan when illumination is optimized.
9. The final mirror positions can be set as new zero.
à This position will be constant unless the mirrors are moved again.
Prerequisite ü A sample with homogeneous fluorescence and high-contrast structures is placed in the sample holder (the sample's fluorescence must be excitable by an NLO line).
ü Choose the objective relevant for the application. A higher NA allows for higher precision when checking and correcting the overlay.
ü The sample with its structured part is in the field of view and focused.
ü A single snap image, preferably using the 488 nm laser, is acquired as reference image with good visibility of the structures.
ü The tunable NLO laser line is tuned to the desired wavelength.
ü Imaging Setup is configured using one internal detection channel at the relevant spectral range with open pinhole.
ü The acquisition zoom and the stage position are kept constant and identical to the acquisition of the reference image.
1. Open Tools > System Maintenance and Calibration.
2. Select the tool Adjust Periscopes.
3. Select the just acquired single image as reference image.
4. Select the laser line whose periscope mirrors should be controlled to check/align for image overlay.
5. Activate this laser for acquisition in the Channels tool.
6. Start a Continuous scan.
Info
If the alignment of the Multiphoton laser is off and only part of the beam or no beam is visible in the objective plane, do not attempt to align using this tool. The tool is designed to correct minor mismatches occurring when the laser is tuned or in case of minor temperature variations. Adjustments should only be made after the system has warmed up for at least one hour. It is assumed that the system is set up in a temperature-controlled room without major air flow, as described in the system setup requirements.
10.7.7.4.1 Introduction
The tool provides controls for the automatic alignment of the Airyscan detector during system operation. The status of the current adjustment is indicated in the status bar of ZEN whenever the Airyscan detector is in use in Airyscan SR or Airyscan Multiplex tracks.
The tool can be opened via the System Maintenance and Calibration dialog located in the
Tools menu or by clicking the arrow next to the icon of the detector in the status bar.
- No signal available
- Alignment ongoing
- Manual Mode: The adjustment can be done using the sliders under Fiber position.
3 Fiber position: Use the sliders to change the adjustment of the fiber in x and y for manual positioning. The sliders can only be operated in Manual Mode. Activate Store Invis correction position automatically to keep this position until the adjustment is active again during continuous scan in Airyscan tracks.
4 Store Current Pos: Stores the current positions of x and y for further reference.
5 Move to Stored Pos: Moves both sliders to the stored position. Going to the stored position is a good start when manually realigning the detector.
6 Detector View: Shows the current intensity distribution of the emission signal over the detector elements.
10.7.7.4.2 Adjusting the Airyscan Detector Automatically via the Status Bar
1. To access the detector adjustment, click on the detector symbol at the Status bar. Depend-
ing on the selected Airyscan mode, the illumination pattern varies in geometry. A correct
adjustment is always symmetrical and centered.
The pattern of the active Airyscan detection elements might vary depending on the chosen
Airyscan detection mode.
10.7.7.4.3 Adjusting the Airyscan Detector Automatically via System Maintenance and Calibration
Info
When using the 405 nm laser (or an NLO laser line) in one track and any other laser in a second track, use continuous mode with both tracks active for Airyscan detector adjustment. The alignment for the 405 nm laser depends on the alignment of the other lasers being done first. If the tracks are adjusted individually, the alignments mutually overwrite each other and cannot work properly. Activate the checkbox Store Invis correction position automatically to keep this position until the adjustment is active again during continuous scan in Airyscan tracks.
Info
In case the experiment requires image acquisition in multi-track mode with frame-wise switching between the tracks and one track using the 405 nm laser for excitation, the following setting needs to be defined to achieve an automatic adjustment of the Airyscan detector during Continuous mode: keep the main dichroic for the 405 nm laser identical between all tracks, whether the tracks work with the 405 nm laser or not. Otherwise the switching of the MBS between the tracks does not allow the adjustment to be finalized and the system stops with an error message. Also check the track setups suggested by Smart Setup.
Info
Airyscan as detector for LSM Confocal track
When Airyscan is used as detector for LSM confocal track, alignment is not needed. The align-
ment is set to the stored position.
Info
Use manual mode
Use manual mode if the alignment takes too long or does not work. Move the sliders to the stored position first for a good start.
1. Click on the detector symbol at the Status bar. Depending on the selected Airyscan mode,
the illumination pattern varies in geometry. A correct adjustment is always symmetrical and
centered.
The pattern of the active Airyscan detection elements might vary depending on the chosen
Airyscan detection mode.
The following function adjusts the readback signals of the scanner when a spline scan is performed (see Parameters for LSM Imaging Modes [} 875]).
To adjust the readback signals, proceed as follows:
6. If this readback signal does not match the drawn line, use the various sliders (Factor and Position for X and Y) to achieve maximum overlay between the two lines.
7. For the spline scan itself it might then be necessary to reduce the scan speed and/or reduce
the bend of the spline curve to achieve optimal spline scan positioning.
The adjustment of the readback signal should be checked from time to time if the overlay of the signal with the spline graphics no longer matches sufficiently.
The LSM is equipped with a variety of diode and solid-state lasers. The lasers can be controlled (switched from Standby mode to On) via ZEN. The tool displays only lasers which are available with the system. (For LSM 980 only.)
2 Power column: Contains the On/Off power control buttons, one for each laser line. For Multiphoton lasers, On/Off affects both the tunable and the fixed line.
The Lattice SIM family consists of Lattice SIM 3 (LSIM 3), Lattice SIM 5 (LSIM 5) and ELYRA 7 with Lattice SIM (ELYRA 7). All Lattice SIM 3/5 and ELYRA 7 illumination/detection units are attached to the rear port of the microscope. The ELYRA 7 module is designed for performing single molecule localization microscopy (SMLM) and/or Lattice structured illumination microscopy (Lattice SIM) as well as SIM Apotome. The highly integrated system design allows combining the two super-resolution illumination modes, SMLM and Lattice SIM, in one module. The Lattice SIM 5 module focuses on Lattice SIM technology with high-mag/high-NA objectives and SIM Apotome. In contrast, Lattice SIM 3 features SIM Apotome together with Lattice SIM capabilities for medium-mag/medium-NA objectives.
The available spectral range extends from the V to the VIS region. For visible light (VIS), you can select from up to three lasers with wavelengths of 642 nm, 561 nm and 488 nm. For violet light (V), a 405 nm laser is available. The laser light is coupled in through polarization-preserving single-mode optical fibers. One variable beam collimator provides adaptation of the respective laser wavelength to the objective used. On ELYRA 7, acousto-optical tunable filters (AOTF) adjust the necessary brightness for all four laser lines within microseconds. The 405 nm laser is additionally equipped with neutral density filters that allow tuning the laser power further down. In contrast, LSIM 3 and LSIM 5 use directly modulated diode lasers.
A point object imaged by an objective lens will always be a blurred spot, the so-called point spread function (PSF), due to the diffraction of light. Therefore, two points cannot come closer than a certain limit and still be resolved. This minimal distance in object space corresponds to a maximal cut-off frequency in frequency space that can be transmitted through the objective lens.
The transmittable frequencies represent the optical transfer function (OTF), which is the Fourier transform of the PSF. As a rule of thumb, the resolution of a far-field light microscope in the lateral direction is approximately half the wavelength, whereas it is about three-fold worse in the axial direction. Hence, for a light microscope, lateral and axial resolutions are approximately 240 nm and 600 nm, respectively. In order to enhance resolution, the PSF has to be narrowed, which is equivalent to expanding the OTF. This is exactly what is accomplished by structured illumination microscopy (SIM) and single molecule localization microscopy (SMLM).
In SIM, a sinusoidal pattern with a defined periodicity, e.g. a 1D line grid (Stripe SIM) or a 2D dot pattern (Lattice SIM), is positioned in the excitation path in a plane that is conjugated to the image plane. Hence the pattern is projected onto the sample and the excitation intensity is modulated along the pattern. This results in a modulation of the fluorescence as well. The highest modulation contrast is obtained in the focal plane; the contrast gets weaker with distance from the focal plane. Nevertheless, depth discrimination in the axial direction is possible via the so-called Talbot effect. The grid frequency is ideally the cut-off frequency of the system. The interferences of the diffraction orders generated by the grid are used for the lateral and axial structuring of the light.
The interference of the structured light with object structures requires that at least part of the light is coherent, and leads to so-called Moiré fringes, whose pattern has a longer periodicity and hence a lower spatial frequency than the object. Each sample or object structure can be regarded as a superposition of many grids. Due to the Moiré effect, the structured illumination shifts object frequencies to lower frequencies. Therefore high frequencies that normally cannot be collected by the system can be transmitted, thanks to their shift into lower frequencies. Due to the diffractive nature of light, the modulation grid cannot have a periodicity smaller than half the wavelength, so the resolution enhancement can be at most two-fold in each lateral and axial direction. Because of the frequency shift, the image frequencies are composed of non-shifted and shifted object frequencies, and hence their contributions have to be determined.
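The frequency-mixing argument above can be checked numerically. The following Python sketch is purely illustrative and not part of ZEN: it verifies the product-to-sum identity underlying the Moiré effect, with invented function names.

```python
import math

# Illustrative only: mixing an object frequency with the modulation frequency
# produces components at the difference and sum frequencies,
#   cos(f_obj*x)*cos(f_mod*x) = 0.5*(cos((f_obj-f_mod)*x) + cos((f_obj+f_mod)*x))
# The low difference-frequency term is the Moire fringe that still fits
# through the objective's pass band even when f_obj itself does not.

def moire_components(f_obj: float, f_mod: float):
    """Frequencies present in the product of two cosine patterns."""
    return (abs(f_obj - f_mod), f_obj + f_mod)

def identity_holds(f_obj: float, f_mod: float, x: float, tol: float = 1e-12) -> bool:
    """Numerically verify the product-to-sum identity at position x."""
    lhs = math.cos(f_obj * x) * math.cos(f_mod * x)
    lo, hi = moire_components(f_obj, f_mod)
    rhs = 0.5 * (math.cos(lo * x) + math.cos(hi * x))
    return abs(lhs - rhs) < tol

print(moire_components(10.0, 9.0))  # (1.0, 19.0)
```

An object frequency of 10 mixed with a modulation frequency of 9 yields components at 1 and 19; only the component at 1 needs to pass the optics.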
A coherent image generated with a sinusoidal line grid pattern generally contains three different orders created by interference: the 0th order from the non-diffracted beam, the 1st order created by interference of the ±1st order diffracted beams with the 0th order non-diffracted beam (frequencies shifted by half the modulation frequency), and the 2nd order created by interference between the +1st order and the −1st order diffracted beams (frequencies shifted by the modulation frequency). It is the 2nd order that contains the high-frequency information for resolution enhancement in X and Y. The 1st order contains information for sectioning and better z-resolution. The pattern generated with a linear grid is created by a three-beam interference. In the case of a 2D lattice, the pattern is generated by a five-beam interference, giving rise to 7 orders.
To shift the frequencies to their correct location, a linear equation system with five or thirteen unknowns (2n − 1, where n is the number of different orders: 3 in the case of a line grid and 7 in the case of a 2D lattice) has to be solved. In the case of the 1D line pattern, this can be accomplished pixel-wise by recording 5 images with a phase-shifted modulation pattern. Since resolution is enhanced only along the orientation of the grid pattern, the grid has to be rotated to obtain nearly uniform resolution in all directions. In general, 3 rotations are used, which have proved sufficient for a near-isotropic resolution. Hence, the reconstruction of a SIM image requires a minimum of 3 × 5 = 15 images for a line grid pattern. Finally, a back transformation from frequency space to real space generates the high-resolution image. In Lattice SIM there are seven orders; hence a minimum of 13 (2 × 7 − 1) phase-shifted images is required. Since the Lattice SIM pattern is symmetrical, no rotation is necessary.
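The image counts derived above can be summarized in a short calculation. This Python sketch is purely illustrative and not part of ZEN; the function names are invented.

```python
# Illustrative only: minimum number of raw images for SIM reconstruction.
# A pattern with n interference orders has 2*n - 1 unknown order
# contributions, so 2*n - 1 phase-shifted images are needed. A 1D line grid
# additionally needs several rotations; a symmetric 2D lattice needs none.

def phases_required(n_orders: int) -> int:
    """Phase steps needed to solve for the 2*n - 1 unknown contributions."""
    return 2 * n_orders - 1

def raw_images_required(n_orders: int, n_rotations: int = 1) -> int:
    """Total raw images: phase steps times pattern rotations."""
    return phases_required(n_orders) * n_rotations

print(raw_images_required(3, n_rotations=3))  # Stripe SIM: 5 x 3 = 15
print(raw_images_required(7))                 # Lattice SIM: 13, no rotation
```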
In SMLM the sample is illuminated in a regime that statistically activates only one fluorescent molecule per PSF. Hence, the center of mass of the PSF can be determined, which can be done much more precisely than the width of the PSF itself. The localization precision depends mainly on the number of collected photons.
Since only a few molecules are active at a time, the procedure of converting molecules to the on-state followed by deactivation to the off-state has to be repeated many times in order to detect all labeled molecules. The aim in SMLM is to activate a few molecules at a time, calculate their centers of mass, plot them in a new coordinate system, deactivate them quickly (preferably after one to three frames), and repeat the process until statistically all molecules have been activated. An SMLM experiment therefore easily needs 10 000 frames or more. The resulting image is a plot of all molecule localizations, each with a precision that depends on the number of photons obtained from the molecule.
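The localization step described above amounts to computing an intensity-weighted centroid. The following Python sketch is purely illustrative and not part of ZEN (real SMLM fitting uses the mask-fit or Gaussian-fit methods described later in this chapter); the names are invented.

```python
# Illustrative only: the centre of mass of a pixelated single-molecule spot
# can be determined with sub-pixel precision, which is the basis of SMLM.

def center_of_mass(image):
    """Intensity-weighted centroid (x, y) of a 2D list-of-lists image."""
    total = sx = sy = 0.0
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            total += value
            sx += x * value
            sy += y * value
    return (sx / total, sy / total)

# A symmetric spot straddling pixel columns 1 and 2 localizes at x = 1.5,
# i.e. between pixels -- finer than the pixel grid itself:
spot = [
    [0, 1, 1, 0],
    [0, 4, 4, 0],
    [0, 1, 1, 0],
]
print(center_of_mass(spot))  # (1.5, 1.0)
```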
Laser light is focused into the back focal plane of the objective to obtain widefield illumination in the specimen. Depending on the angle of a TIRF mirror, either Epi-, HILO- or TIRF-illumination is achieved. Light emitted from the specimen is directed back via a reflector cube that separates the emission from the excitation light. The fluorescence signals are directed to individual CMOS or sCMOS cameras.
1. Start the software as described in the general chapter, see Starting Software [} 22].
2. On the profile selection window, click ZEN Lattice SIM.
à The software starts and a dialog for stage/focus calibration is displayed.
3. Click Calibrate Now to start the calibration. This step is needed to obtain absolute stage
coordinates when the hardware was newly started. You can skip this step if the hardware
was not turned off after the last calibration.
à Stage/focus is calibrated and ZEN opens.
4. At the top of the Acquisition tab, go to the Sample Carrier section and click Select. Alternatively, click Detect for automatic sample carrier detection.
à A dialog to select a sample carrier template opens.
5. Select the sample carrier template that is used for the experiment and click OK.
à You have selected the sample carrier for your experiment.
6. On the Acquisition tab, open the Imaging Setup tool. Alternatively, you can also open
the Channels tool.
à The track selection is displayed.
7. Click + Lattice SIM to add a Lattice SIM track, + Apotome to add a SIM Apotome track, + Laser Widefield to add a laser widefield track used for localization microscopy (SMLM), or + Widefield to add a widefield track using a thermal or LED light source.
You have started and prepared the software for Lattice SIM, SIM Apotome, laser widefield or widefield, respectively. You can now set up your experiment.
ZEN offers the possibility to use SIM Processing in Direct Processing. For general information
about the Direct Processing functionality and the setup, refer to Direct Processing [} 219]. In
combination with SIM Processing, you have the possibility to either adjust the parameters of the
functions in the tool, or you can take the parameter settings from an already processed image,
which serves as reference image.
Prerequisite ü If you are using Direct Processing on different computers, you have connected acquisition and
processing computer, see Connecting Acquisition Computer and Processing Computer
[} 222].
ü To ensure that the processing computer reads incoming files and starts the processing, you
have clicked Start Receiving in the Direct Processing tool on the Applications tab. This is
usually active by default.
ü On the Acquisition tab, Direct Processing is activated. This activates the Auto Save tool as
well.
ü Depending on your settings, you have defined the folder where the acquired images are
stored in the Direct Processing or the Auto Save tool. Use a folder to which the processing
computer has access. For information about sharing a folder, see Sharing a Folder for Direct
Processing [} 238].
ü On the Acquisition tab, you have set up your experiment for image acquisition, with either a
Lattice SIM track, or a SIM Apotome track.
ü If you want to use the parameter settings of a reference image that has already been pro-
cessed with SIM Processing, you need to have this reference image open in ZEN.
1. On the Acquisition tab, open the Direct Processing tool.
à The parameters are displayed. In the processing pipeline, the first block is selected automatically. With a configured SIM acquisition track, SIM Processing is preselected by default.
2. If the function is not preselected, go to the Processing Function dropdown list and select
SIM Processing.
à The parameters of the function are displayed. Note that not all parameters are available in Direct Processing (e.g. Experimental PSF and Detrend are not possible). Additionally, the parameter Histogram is preset to Scale to Raw, Baseline is set to Cut, and both parameters cannot be changed.
3. Set all the parameters of the function for your experiment. For detailed information about
the parameters, see SIM Processing [} 1305].
à Your parameters are set for SIM Processing.
4. If you want to use the parameter setting from a reference image, activate Use Reference
Image and then click Select Reference Image.
à A window opens which displays all images open in ZEN.
5. Select your reference image.
à You have selected a reference image for processing. All parameters are set to the values
of the reference image and cannot be edited.
6. If you want to use another processing function after finishing SIM Processing, click Add
Function.
à A new container is added in the processing pipeline.
7. Select the new container, select the respective processing function from the dropdown,
and set the parameters for the function.
à You have defined a second processing that is executed after SIM Processing is finished.
8. Click Start Experiment to run the experiment. Note: You can pause the processing. If you
stop the experiment, requests that have been sent earlier by the acquisition computer are
not processed. However, already processed images will be retained.
à The images are stored in the folder you have defined in the Auto Save or Direct Pro-
cessing tool. When you abort the acquisition, the remote processing will not take place.
In case you have set up several processing functions, only the acquired image and the fi-
nal output image are stored.
à The processing computer reads incoming files and starts the processing. The path to the
selected folder, the currently processed image as well as the images to be processed are
displayed in the Direct Processing tool. The processed image is saved to the same
folder specified in the Direct Processing tool. If the image name already exists in this
folder, the new file is saved under a new name <oldName>-02.czi.
9. To cancel the processing on the processing computer, on the Applications tab, in the Di-
rect Processing tool, click Cancel Processing.
Once processing is finished, you are notified on the acquisition PC and can open and view the ac-
quired image as well as the processed image. This should be done on the processing computer, so
that you can immediately start a new experiment on the acquisition computer. However, you can
also automatically open the processed image on the acquisition PC with the respective setting in
the Direct Processing tool on the Acquisition tab.
Information about Direct Processing (e.g. the duration) is available on the Info view tab of the
processed image.
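The renaming rule mentioned above (saving under <oldName>-02.czi when a name already exists) can be sketched as follows. This Python sketch is purely illustrative and not ZEN's actual implementation; in particular, the behaviour for further collisions (-03, -04, ...) is an assumption made for this example.

```python
# Illustrative only: if the image name already exists in the output folder,
# the new file is saved under a suffixed name such as <oldName>-02.czi.
# Continuing with -03, -04, ... on further collisions is assumed here.

def unique_name(base: str, existing: set, ext: str = ".czi") -> str:
    """Return a file name that does not collide with the given set of names."""
    candidate = base + ext
    if candidate not in existing:
        return candidate
    n = 2
    while f"{base}-{n:02d}{ext}" in existing:
        n += 1
    return f"{base}-{n:02d}{ext}"

print(unique_name("scan", {"scan.czi"}))  # scan-02.czi
```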
Info
SIM Processing in Direct Processing
When you are using SIM Processing in Direct Processing, not all the parameters are available
and some are limited or preset to a specific value. For information on the use in Direct Process-
ing, see Using SIM Processing in Direct Processing [} 1304].
This method allows you to process a set of structured images acquired in SIM mode on ELYRA.
For datasets that additionally contain other types of image data ("mixed mode" files), this function
selectively processes the SIM data.
Parameter Description
Settings: Enables you to manage settings, see General Settings [} 83].
Input Data Info: Displays the image mode for the loaded image.
Time Range: Selects the first and last image of a time series for processing. Default is the full range.
Z-Range: Selects the first and last plane of a z-stack for processing. Default is the full range.
Burst Mode: Switches between Block and Burst mode processing of 2D time series. By default it is deactivated. Activated: Uses Burst mode processing and activates sliding-window computation of a 2D time series. Deactivated: Uses Block mode processing; blocks of the defined number of phases are processed.
Adjust per Channel: Only visible if your image has more than one channel. Activated: Displays all channels of a multi-channel image and all settings are applied to the selected channel. Deactivated: Settings are applied to all channels.
Processing Dimension: Selects how data should be processed (the best option is selected based on the dimension of the dataset).
Result Sampling: Defines the sampling factor for the pixel size of the output image. By default it is set to 2.
– 2: Halves the pixel size of the raw image in the output image (Nyquist sampling for max. SIM resolution).
– 3: Splits the pixel size of the raw image into thirds in the output image.
– 4: Quarters the pixel size of the raw image in the output image (Nyquist sampling for max. SIM² resolution).
– 1: Maintains the pixel size of the raw image, divides the original pixel size in 1x1.
– 2: Halves the pixel size of the raw image, divides the original pixel size in 2x2.
– 3: Splits the pixel size of the raw image into thirds, divides the original pixel size in 3x3.
– 4: Quarters the pixel size of the raw image, divides the original pixel size in 4x4.
– Widefield: Selects widefield processing, i.e. averaging of phases, and displays the relevant widefield processing parameters.
– Simple Sum: Uses a simple order sum (all weights are 1).
– Total Variation: Uses total variation to keep surfaces as flat and edges as sharp as possible.
– Tikhonov Miller: Uses the Tikhonov-Miller method for smoothing of objects.
Regularization Weight: Sets the strength of the regularization weight with the slider (only meaningful if Total Variation or Tikhonov Miller is selected; default is zero).
– Gauss: Applies a Gauss filter to the input image. A Gaussian filter acts as a low-pass filter, reducing noise (high-frequency components) and blurring regions of an image. The filter is implemented as an odd-sized symmetric kernel (DIP version of a matrix) which is passed over each pixel of the region of interest to achieve the desired effect.
– Median: Applies a median filter to the input image. The principle of the median filter is to replace the gray level of each pixel by the median of the gray levels in a neighborhood of the pixel, instead of using the average. For median filtering, the kernel size is specified, the pixel values covered by the kernel are listed, and the median level is determined.
Histogram: Defines how the image is scaled according to the gray values.
– Scale to Min/Max: Scales the histogram range to the maximum dynamic range.
– Scale to Raw: Keeps the histogram range from the raw image.
– Shifted: An offset is applied with negative values kept, so that the minimal (negative) intensity becomes zero intensity.
Experimental PSF: Activated: Allows you to attach an experimental PSF which is used for processing. Deactivated: Uses the theoretical PSF for processing.
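The median-filter principle described in the table above can be illustrated with a minimal 1D example. This Python sketch is purely illustrative and not the filter implementation used by ZEN.

```python
# Illustrative only: each sample is replaced by the median of the values
# covered by the kernel. Unlike averaging, a single noise spike is removed
# without blurring the surrounding values. Edge samples are kept unchanged.

def median_filter_1d(signal, kernel=3):
    """Apply a median filter with an odd kernel size to a 1D sequence."""
    half = kernel // 2
    out = list(signal)
    for i in range(half, len(signal) - half):
        window = sorted(signal[i - half : i + half + 1])
        out[i] = window[len(window) // 2]
    return out

print(median_filter_1d([1, 1, 9, 1, 1]))  # [1, 1, 1, 1, 1]
```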
See also
2 General Settings [} 83]
Parameter Description
Settings: Enables you to save and reload the adjusted settings, see also General Settings [} 83].
Feature Size (px): Defines the peak mask size (radius) in pixels for a detected peak intensity; it is used to cut out the peak and put it into a new coordinate system.
Peak Quality: Defines the peak intensity to noise ratio, i.e. the threshold of the signal strength of an identified peak over background at which the peak is regarded as genuine.
– Mask Fit: Uses a Gaussian mask with a defined PSF according to the set PSF Half Width value to match the intensity distribution of a peak in x and y as well as possible.
– Gauss Fit: The intensity distributions of the identified peaks are fitted to a full 2D Gaussian function. The fit in each direction considers the influence of the other direction (x, y 2D Gauss).
– Experimental PSF: Allows you to attach a PSF via the Input tool to the loaded input image. The attached PSF will be used to fit the peak in x and y for a 2D dataset, and in x, y and z by simulation.
Overlapping Molecules: Allows you to select how overlapping molecules should be treated.
– Discard: Only single emitters are considered and fitted; all multi-emitter events are discarded (single emitter setup).
– Ignore: All single and multi-emitter events are considered and fitted as single emitter events (ignore emitter setup).
– Account For: Multi-emitter events are fitted considering the overlap (multi-emitter setup).
– Maximum Cluster Size: Selects the maximum number of molecules in a cluster that are used for computation in the Account For setup.
– PSF Half Width: Sets the PSF width (1/2 FWHM) for the Account For overlap setup.
10.9 Viluma 9
10.9.1 Controlling the Viluma 9 Light Source with the Microscope Tool or Imaging
Setup Tool
10.9.2 Controlling the Viluma 9 Light Source with the Channels Tool
11 Maintenance
11.1 Creating a Service Report
The service report contains log files from MTB and ZEN. If you want to create a Service Report,
the following steps are necessary.
2. Enter date, time, a useful description of the issue that occurred, and the reason for creating this service report.
3. To include additional files to the report, click Add.
à The Windows Explorer opens.
4. Open the folder with the file to be added.
5. Select the file.
6. Click on Open.
à The Windows Explorer closes. Location and file name of the added file are shown in the
display field.
7. To remove a file from the display field again, select it and click Remove.
8. If required, activate Open file location in Windows Explorer after report is created.
9. Click Create report.
The System information window appears, showing the storage progress.
Once the window closes, the Service Report has been created successfully.
This process might take several minutes, depending on the system, the size of the logs, and whether the Windows system information is collected as well. The storage location of the Service Report is C:/ProgramData/Carl Zeiss/Remote Service.
For some functionality, you additionally need the software Docker Desktop on your PC. This includes the download and use of dedicated AI models (e.g. for instance segmentation) and the execution of arivis Cloud modules with the arivis Cloud (on-site) functionality. You have to install Docker Desktop yourself (it is not provided by ZEISS via the ZEISS Microscopy Installer) and register/sign in with an account. Note that Docker requires a paid subscription for commercial use, so you need to choose the correct license. For information on how to install Docker Desktop, always refer to the latest documentation provided by Docker itself, see https://docs.docker.com/desktop/install/windows-install/, or generally https://docs.docker.com/.
Docker Desktop has its own system requirements, which are detailed in the Docker documentation (https://docs.docker.com/desktop/install/windows-install/). Always refer to the current information provided by Docker. As of July 2024, the system requirements are as follows:
§ Windows 11 64-bit: Home or Pro version 21H2 or higher, or Enterprise or Education version
21H2 or higher.
§ Recommended Windows 10 64-bit versions: Home or Pro 22H2 (build 19045) or higher, or
Enterprise or Education 22H2 (build 19045) or higher (Download: https://www.docker.com/
products/docker-desktop).
§ Minimum required Windows 10 64-bit versions: Home or Pro 21H2 (build 19044) or higher,
or Enterprise or Education 21H2 (build 19044) or higher.
§ For Windows 10 Enterprise LTSC 2019 (version 1809), a Docker version lower than 4.24.2 is required; it can be downloaded from here: https://docs.docker.com/desktop/release-notes/
§ BIOS-level hardware virtualization support must be enabled, see also https://docs.docker.com/
desktop/troubleshoot/topics/#virtualization.
§ 64-bit processor with Second Level Address Translation (SLAT)
§ 4 GB RAM. For the use of AI models, the recommended hardware configuration is a GPU with 8 GB of memory and 64 GB of RAM. Only NVIDIA GPUs and the CPU are supported.
§ The WSL 2 feature on Windows has to be installed and enabled, see also https://
docs.docker.com/desktop/windows/wsl/.
§ WSL version 1.1.3.0 or later.
§ If your version of Windows does not support the WSL 2 feature, the Hyper-V and Containers Windows Features must be enabled.
If you cannot use WSL2, the resources allocated to Docker are not automatically managed by
Windows and might not be big enough in certain instances. In such a case, you can also adapt
the resource allocation manually.
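When WSL 2 is in use, the resource limits for the Docker VM can be set in a `.wslconfig` file in the Windows user profile. The values below are illustrative examples only, not ZEISS recommendations; pick limits that suit your machine:

```ini
; %UserProfile%\.wslconfig — limits apply to the whole WSL 2 VM
[wsl2]
memory=8GB      ; upper RAM limit for the WSL 2 VM (example value)
processors=4    ; number of logical processors (example value)
swap=8GB        ; swap file size (example value)
```

After editing the file, restart WSL (e.g. `wsl --shutdown`) for the new limits to take effect.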
Additionally, Docker Desktop has experimental features that are activated by default, which can lead to increased resource consumption (e.g. disk space). Such features can also be deactivated manually.
If you work with Docker on a system with multiple Windows users, always sign out from your Windows account when you have finished your work to initiate a shutdown of Docker. Otherwise the other users may have problems using Docker with ZEN core.
Hardware Recommendations:
See also
2 arivis Cloud (on-site) [} 206]
2 Downloading AI Models [} 71]
2 Importing AI Models [} 72]
12 FAQ
12.1 What can I do if my image is too dark?
§ In the Locate tab, click on the Set Exposure button. This will calculate the correct exposure time automatically.
§ In the Camera tool, manually adjust the Time slider until you achieve the desired result.
§ In the General View Options > Display tab, adjust the display curve, see chapter Adjusting Live Image Settings [} 35].
12.2 How can I balance my image's color?
To perform an automatic or manual white balance you must use a color camera. In the Left Tool Area > Locate tab > Camera tool, the White Balance section appears. There you can perform a white balance with one of these methods:
Auto Method:
1. Move the sample out of the Live window’s field of view, so that you only see the background (essentially the light source).
2. Click on the Auto button.
3. The white balance will be calculated automatically. Afterwards move your specimen back
into the field of view.
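The idea behind the Auto method can be sketched with a simple gray-world calculation: each channel is scaled so that the measured background becomes neutral gray. This is only an illustration of the principle, not ZEN's actual algorithm:

```python
import numpy as np

def auto_white_balance(rgb):
    """Gray-world style white balance: scale each channel so the background
    frame averages to a neutral gray (sketch of the principle only)."""
    means = rgb.reshape(-1, 3).mean(axis=0)   # per-channel mean of the background
    gains = means.mean() / means              # gain that equalizes the channels
    return np.clip(rgb * gains, 0.0, 1.0)

# Background frame with a warm (reddish) color cast
frame = np.ones((4, 4, 3)) * np.array([0.9, 0.7, 0.5])
balanced = auto_white_balance(frame)
print(np.round(balanced[0, 0], 2))  # all three channels now equal (neutral gray)
```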
Interactive/Pick… Method:
1. Click on the Pick… button.
2. Click on a white area of the Live window which should be represented as white.
This area will be used as a reference for the white balance.
3200K Method:
Use this method if working with a halogen bulb.
1. If available, set the light source to 3200 K by pressing the 3200K button located on the body of the microscope.
2. Click on the 3200K button.
This method also depends largely on the quality/age of your bulb. If the color rendition is not as desired, try the Auto or Interactive/Pick… methods above.
5500K Method:
Use this method if working with an LED.
12.3 Why does my image show dust or a shadow, although my specimen is clean?
If the dust is not on your specimen, the best method is to clean the optical elements that lie in the imaging pathway of your microscope. If that is not feasible, you can alternatively perform a Shading Correction as shown below. This solution has some limitations, especially if the dust is very dark or thick.
1. Move your sample out of the Field of View until you see nothing but the light source/dust.
2. In the Camera tool, in the Post Processing section, click on the Channel Specific button.
3. Move your sample back into the Field of View.
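The principle behind such a correction is flat-field (shading) correction: the recorded image is divided by a reference image of the empty field of view, so that dust and uneven illumination cancel out. The sketch below is a generic illustration of that principle, not ZEN's Channel Specific implementation:

```python
import numpy as np

def shading_correction(raw, reference):
    """Flat-field correction: divide by a reference image of the empty field
    of view and rescale to preserve overall brightness."""
    ref = reference.astype(float)
    return raw.astype(float) * (ref.mean() / ref)

reference = np.ones((4, 4)); reference[1, 2] = 0.5   # a dust spot darkens one pixel
truth = np.full((4, 4), 0.8)                          # the specimen's true brightness
raw = truth * reference / reference.mean()            # what the camera records
corrected = shading_correction(raw, reference)
print(np.allclose(corrected, truth))  # True: the dust spot is gone
```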
12.4 Why does my image look as if something has burned in (e.g. a shadow of a previous specimen)?
Check whether the Shading Correction of the previous experiment is still active.
1. Move your sample out of the Field of View until you see nothing but the light source.
2. In the Camera tool, in the Post Processing section, click on the Channel Specific button.
3. Move your sample back into the Field of View.
4. Perform a White Balance, see chapter How can I balance my images color? [} 1316]
See also
2 Why does my live image show extreme colors in comparison to what I see in the eyepieces? [} 1318]
12.6 What can I do if my live image is of low quality and looks pixelated?
1. In the Camera tool > Mode section, select the entry Slow in the Live Speed dropdown list.
2. Right-click on the live image and select the Fit to View entry.
3. Optionally, you can also activate the Interpolation checkbox in the Dimensions tab.
Solution A
The live speed of your image is possibly set too slow. Increase the Live Speed.
1. In the Camera tool, in the Mode section, select a faster speed from the Live Speed dropdown menu. There are at most three choices, depending on the camera: Slow, Medium, Fast.
Solution B
The exposure time is possibly set too high or otherwise inappropriate. Optimize your settings in the Camera tool, in the Exposure time section.
Check whether the checkbox Range Indicator is activated. If this is the case, the display switches to the Single Channel mode and the channel is displayed monochrome. Simultaneously, areas where the camera sensor is saturated are shown in red, and areas in which the pixel values are 0 are shown in blue. If this is no longer needed, deactivate the checkbox.
12.9 What can I do if my live image is still black or white after setting the exposure?
Check to see that your display curve is not set all the way to the left/right. Try to reset the display
curve by clicking in the Display tab on the Reset button to achieve the default setting.
12.10 Why does my live image show extreme colors in comparison to what I see in the eyepieces?
The reason could be that your display curve is not adjusted.
See also
2 Why is my image color not the same as what I see through the eyepieces? [} 1318]
12.11 Why is my image resolution lower than the given camera specification?
Refocus the specimen on the microscope. You may activate the Focus Bar as an additional aid.
12.13 Why is my image color not the same as what I see through the eyepieces?
This is largely dependent on the color of your light source. The following instruction assumes that your light source is set to white.
1. In the Camera tool, in the Settings section, click on the Default button to reset the camera to its factory defaults.
2. Click on the Set Exposure button.
3. In the Display tab, click the 0.45 button. If you do not see this button, activate the Show
All mode.
See also
2 How can I balance my image's color? [} 1316]
In the first instance, test the SWAF with the default settings and select an appropriate channel as the reference channel – remember that you can uncheck the reference channel in the Channels tool so that it will not be imaged, but will still be used for a focus strategy or SWAF. Consider using a transmitted light channel if possible, as this will not be subject to bleaching. Remember that the sample plane that returns a maximum from the SWAF run is not necessarily the same plane as the one you are interested in imaging.
If this is the case, then you need to capture z-stacks with a suitable range, or make use of the focus offset function that allows each channel to be offset by a relative value in z to the reference channel. Typically, once you have established that a maximum can be reliably found with the default SWAF parameters, you might want to consider optimizing the SWAF run by modifying the parameters to reduce the time required to complete it or to reduce sample exposure (phototoxic stress levels).
12.15 The SWAF returns an error message. What does this mean and how can I correct
this?
The most typical error encountered is that the SWAF run does not find (return) a clear maximum and hence fails (hits the so-called search boundary). This might be because the step size is set too coarse (i.e. the maximum is occasionally missed), or because the search range does not contain a clear sharpness maximum, e.g. due to low contrast/intensity (signal to noise) or a lack of signal.
This can happen, for example, at the first tile (upper left) of a tile region which is empty, as is often the case in the corners of such tile regions. In this case it is possible to modify the SWAF focus strategy in the Tile Region loop such that the SWAF will be executed at the centre of the tile region. In ZEN 2.3 and higher, if a SWAF failure occurs at a given loop entity, a fall-back applies such that the initial or last known z value will be used.
12.16 SWAF returns a failure after reaching a search boundary – what's wrong?
If you have optimized the SWAF and the reference channel is typically robust, the solution might require that you look at the overall stability of the focus due to environmental variables such as vibration isolation, temperature flux, evaporation of fluid from sample vessels, or poor lighting conditions (extraneous light).
Generally, if these are optimized, better overall results can be expected; if not, the fluctuations they cause might be the source of such reliability problems when detecting the sharpness maximum. Other things to consider are optical disturbances such as loss or poor immersion of the objective (bubbles), or artefacts caused by sample characteristics (for example (cell) debris moving across the FOV during the SWAF run). Perhaps it is possible to use a fiducial marker if the sample allows this (see Auto Focus ROI above).
This depends entirely on the sample in question and the manner in which it is imaged (e.g. reflected or transmitted light). However, if we are to generalize, the answer might simply be given as follows. A Full search will return the global sharpness maximum from the entire search range defined in the SWAF tool. The search always runs in one direction, i.e. the z-movement of the actuator is unidirectional, always moving against gravity.
Smart, on the other hand, is intended to quickly* detect a local sharpness maximum by allowing a bidirectional search pattern. Thus, the Full setting is typically a more extensive search that takes longer to accomplish. However, the Smart setting may be suitable and saves a great deal of time while reducing sample light exposure.
*Typically, Smart is faster, but under some circumstances it might be slower. This is likely the case when the maximum is far away from the starting position.
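The contrast between the two search modes can be sketched as a simple simulation. The sharpness function, the peak position and the search logic below are illustrative assumptions, not ZEN's actual autofocus implementation; they merely show why a unidirectional full sweep finds the global maximum while a bidirectional hill climb is usually quicker but only locally optimal:

```python
import numpy as np

def sharpness(z):
    """Stand-in focus score with a single peak at z = 12 (arbitrary units).
    In a real system this would be a contrast metric computed on a camera frame."""
    return np.exp(-((z - 12.0) / 3.0) ** 2)

def full_search(z_min, z_max, step):
    """Unidirectional sweep over the whole range; returns the global maximum.
    If `step` is too coarse, a narrow peak can be missed (cf. the failure mode above)."""
    zs = np.arange(z_min, z_max + step, step)
    scores = [sharpness(z) for z in zs]
    return zs[int(np.argmax(scores))]

def smart_search(z_start, step, z_min, z_max):
    """Bidirectional hill climb from the start position; returns a local maximum."""
    z = z_start
    direction = step if sharpness(z + step) >= sharpness(z - step) else -step
    while z_min <= z + direction <= z_max and sharpness(z + direction) > sharpness(z):
        z += direction
    return z

print(full_search(0.0, 30.0, 0.5))            # sweeps 61 positions, finds the peak
print(smart_search(20.0, 0.5, 0.0, 30.0))     # walks downhill in z, far fewer evaluations
```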
12.18 Can I change camera parameters in a SWAF run e.g. exposure time or binning?
In the implementation of SWAF in ZEN blue version 2.1 and earlier, it was not possible to change the acquisition parameters directly. The exposure time used by the SWAF is based on the exposure time setting of the reference channel. Thus, longer exposure times of the reference channel increase the exposure time employed by the SWAF, up to a maximum of ca. 100 ms. Weak signals in the reference channel may therefore increase the likelihood that no clear maximum will be found. Binning settings of a camera used by the reference channel are not taken into account, and SWAF always uses 1x1 binning – this may cause issues with cameras with smaller pixels (e.g. new AxioCam models).
In ZEN blue version 2.3 and higher, the SWAF has been further adapted to address the described limitations. Exposure times defined for the reference channel are now used even when they exceed 100 ms. Settings that apply to all the channels defined in the Acquisition Mode tool still apply to the SWAF, i.e. an independent binning setting for the reference channel is not yet possible. Please take this into account when setting up your SWAF parameters. In addition, the SWAF run has been streamlined and can make use of triggered acquisition when supported by the camera, making the SWAF run ca. a factor of two faster than in previous versions.
Glossary
Airyscan Principle
A classic confocal microscope illuminates one spot on your sample to detect the emitted fluorescence signal. Out-of-focus emission light is rejected at a pinhole, the size of which determines how much of the Airy pattern reaches the detector. You can increase the resolution by making the pinhole smaller, but the signal-to-noise ratio drops significantly since less valuable emission light passes through. With Airyscan, ZEISS introduces a new concept: instead of throwing light away at the pinhole, a 32-channel area detector collects all light of an Airy pattern simultaneously. Each detector element functions as a single, very small pinhole. Knowing the beam path and the spatial distribution of each Airy pattern enables very light-efficient imaging: you can now use all of the photons that your objective collected.

ApoTome - Global bleaching correction
For each raw image the total (sum) intensity is measured and a global decay curve determined. This decay factor is used to correct the brightness of all pixels in the raw image. This method is suitable for samples with only one fluorescent dye present.

ApoTome - Local bleaching correction
If more than one fluorescent dye is present in the sample, a single decay curve cannot be used to correct bleaching. This is the case for most biological samples, especially when considering the contribution of autofluorescent substances to the total signal detected. The solution is a local bleaching correction that determines a decay factor for each individual pixel position in each raw image, which effectively removes artifacts even for complex dye combinations.

ApoTome Bleaching correction
The main reason for residual stripe artifacts in ApoTome images is that acquiring at least 3 grid images for one resulting processed image leads to bleaching of the fluorescent dyes. This bleaching leads to brightness differences in the raw images and to artifacts when processing the data uncorrected. The principle for correcting bleaching in widefield data is the fact that, no matter how far away from the focal plane the detector is placed, the sum intensity emitted from the sample remains the same. This is also true for the grid images. This fact is used for the ApoTome. Two methods exist, both of which are patented.

Binning
Binning is understood to mean the combination of neighboring image elements (pixels) on the image sensor itself, e.g. the CCD sensor in a digital camera. Source: Wikipedia

Bleaching Correction
The characteristics of a widefield fluorescence microscope are based on the assumption that all Z-planes have the same total brightness, irrespective of the focus position. Bleaching correction makes use of this by applying a correction factor to each Z-plane. However, this assumption does not apply to techniques that result in the generation of optical sections, such as confocal images.

Burst Mode
Burst Mode is an optimization that enables recording at the fastest frame rates achievable with the camera hardware used. This mode requires some compromises for the sake of speed: it supports only single-channel time lapse acquisition, the update of the image display is suppressed while
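The pixel combination described under Binning happens on the sensor itself, but its arithmetic can be mimicked in software. The following NumPy sketch (an illustration, not part of ZEN) sums each 2×2 block of neighboring pixels:

```python
import numpy as np

def bin2x2(img):
    """Sum each 2x2 block of neighboring pixels (software analogue of 2x2 binning)."""
    h, w = img.shape
    trimmed = img[:h - h % 2, :w - w % 2]            # drop odd edge rows/columns
    return trimmed.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

frame = np.arange(16).reshape(4, 4)
print(bin2x2(frame))  # a 2x2 image; each value is the sum of a 2x2 pixel block
```

The result has a quarter of the pixels but roughly four times the signal per pixel, which is why binning trades spatial resolution for sensitivity and speed.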
recording, and the maximum time lapse duration depends on the size of the available main memory (minus some breathing room for the operating system). If a multi-channel image needs to be acquired, Burst Mode will be disabled and the maximum frame rate can be slower than specified in the camera hardware performance documentation.

Clipping planes
The purpose of clipping planes is to cut open the calculated 3D image so that elements on the inside can be visualized. Clipping planes can cut the volume in such a way that either the front, back or both sides of the volume data are no longer visible. In addition, the clipping plane itself can be given various textures. This is a very important modeling option for analyzing 3D data.

Colocalization
Acquiring fluorescence images in several channels makes it possible to visualize the relationship between biological structures. A combined display of two channels in color overlay mode makes it easier to assess whether the components are colocalized, i.e. whether they are located at the same position. Conventionally, two fluorescence channels are displayed in the form of a color-coded overlay. The most common form is the red/green overlay. Regions in which both fluorescent dyes are present at the same place are displayed in yellow. It is not possible, however, to make quantitative statements concerning the extent of colocalization on the basis of this display. At best, a qualitative statement is possible with regard to whether or not two dyes are colocalizing. The Colocalization module is able to fill this gap and presents the user with a tool that enables colocalization to be determined quantitatively. Principle: It is always the colocalization of two channels that is analyzed. Colocalization results from the pixel-by-pixel comparison of intensities for each channel.

Constrained Iterative
The best image quality is achieved using the iterative maximum likelihood algorithm (see Schaefer et al.: 'Generalized approach for accelerated maximum likelihood based image restoration applied to three-dimensional fluorescence microscopy', J. of Microscopy, Vol. 204, Pt 2, November 2001, pp. 99ff.). This algorithm is able to calculate light from various focal planes back to its place of origin. Consequently, with this method it is possible to derive the 3D structure from fluorescence images with the correct brightness distribution and to visualize optical sections. It is also possible for missing information to be partially restored from neighboring voxels. The spatial resolution can be increased without artifacts up to a theoretical limit (one voxel). It is essential for z-stacks to have been acquired in accordance with Nyquist. Acquiring sufficient planes above and below the structure of interest is also imperative for achieving good results. As this is a complex mathematical method, the calculation can take longer, depending on the image size and the PC being used.

Costes
Costes et al. (Biophysical Journal, 2004, vol. 86, pp 3993-4003) have published a statistical method with the help of which an attempt is made to determine an optimal colocalization threshold automatically. This takes place by initially maximizing the threshold for both channels and then gradually reducing it. With each step, Pearson's Correlation Coefficient is determined for all pixels below the set value. These steps are repeated until the Pearson value is minimized (ideally a value of 0 for perfectly colocalizing channels). See the publication for further details. This method has been implemented in Colocalization.
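The Pearson's Correlation Coefficient at the heart of the Costes method can be sketched as follows. The synthetic two-channel images below are illustrative assumptions, not ZEN data, and the function shows only the coefficient itself, not the iterative thresholding:

```python
import numpy as np

def pearson(ch1, ch2):
    """Pearson's correlation coefficient between two channel images, pixel by pixel."""
    a = ch1.astype(float).ravel() - ch1.mean()
    b = ch2.astype(float).ravel() - ch2.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

rng = np.random.default_rng(0)
green = rng.random((64, 64))
red_colocalized = 0.8 * green + 0.2 * rng.random((64, 64))  # mostly follows green
unrelated = rng.random((64, 64))                            # independent signal

print(round(pearson(green, red_colocalized), 2))  # close to 1: strong colocalization
print(round(pearson(green, unrelated), 2))        # close to 0: no colocalization
```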
Clicking on Auto initiates the above iterative process, which, depending on the sample, can take several seconds. The threshold now set corresponds to the calculated confidence criterion. This method works very well with large, diffusely stained structures such as nucleoplasm or diffuse cytoplasmic structures. Under certain circumstances it does not function so well for small structures (e.g. nuclear speckles or vesicular structures), particularly in the case of widefield images, where the signal-to-background ratio is not as good as it is with methods that involve the generation of optical sections (e.g. LSM, TIRF or ApoTome). The Regions button becomes active as soon as a region is inserted into the scatter plot. It remains active as long as regions are selected or moved there. Activating and deactivating the button makes it possible to switch between threshold selection using the mouse and the selection/moving of selected regions in the scatter plot image. If regions are defined in the scatter plot, the corresponding data appear in the table in addition to the overall image.

Deconvolution
Deconvolution is a method that is used to improve fluorescence images in particular. Image information acquired using a microscope system can never fully reproduce the structures of the actual object. This is because unavoidable distortions occur during acquisition due to the optics and electronics. In addition, particularly in the case of fluorescence microscopes that do not offer any methods for generating optical sections, light from areas of the object outside the objective's focal plane is also always acquired. This covers the structures that the user actually wants to see to a varying degree and therefore leads to a reduction in the contrast and consequently in the visible resolution. These optoelectronic effects can be described mathematically in the form of the point spread function (PSF). If the PSF is known, it is possible to correct the negative effects to a large extent using deconvolution. This produces a completely sharp image of the object that is richer in contrast. Deconvolution is usually performed on Z-stacks, i.e. it is used as a 3D method. However, it can also be used to a limited extent to improve 2D images. A good review of deconvolution can be found in Wallace et al., 2001: A Workingperson's guide to deconvolution in light microscopy; Biotechniques 31: 1076-1097.

Discrete Fourier Transform
The Discrete Fourier Transform functionality is based on the publication: "Multiple imaging axis microscopy improves resolution for thick-sample applications", Jim Swoger, Jan Huisken, and Ernst H. K. Stelzer, OPTICS LETTERS / Vol. 28, No. 18 / September 15, 2003.

Display characteristic curve
The display characteristic curve allows you to define the range of the gray value histogram of an image that you want to display on the screen. The limit on the left defines the gray value up to which all pixels are displayed as pure black (black value), while the limit on the right defines the gray value from which all pixels are displayed as pure white (white value). The curvature of the curve defines the so-called gamma value.

Drag&Drop
Literally "dragging and dropping": moving objects (e.g. files, icons, etc.) on the screen, for example from one folder to another. Click the object with the left mouse button, keep the button pressed, and move the object with the mouse to the desired location.
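The black value, white value and gamma described under Display characteristic curve amount to a simple per-pixel mapping. The sketch below is illustrative only, not ZEN's rendering code; the parameter names are assumptions:

```python
import numpy as np

def apply_display_curve(gray, black, white, gamma):
    """Map gray values to display intensities in [0, 1]: values at or below
    `black` become pure black, values at or above `white` pure white, and
    `gamma` bends the curve in between."""
    t = np.clip((gray.astype(float) - black) / (white - black), 0.0, 1.0)
    return t ** gamma

pixels = np.array([0, 50, 100, 150, 200, 255])
out = apply_display_curve(pixels, black=50, white=200, gamma=1.0)
print(out)  # 0 below the black value, 1 above the white value, linear in between
```

A gamma below 1 (such as the 0.45 used elsewhere in this manual) brightens the mid-tones without changing the black and white endpoints.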
…visualize and analyze speed and acceleration of moving objects with a simple 2D representation.

Maximum mode
In the case of a maximum intensity projection, only the pixels with the highest intensity are displayed along the observation axis. This view is well suited to the two-dimensional display of three-dimensional images, e.g. in publications, one reason being that a maximum transparency effect is only visible in this mode.

Mixed mode
In Mixed mode, a volume can be displayed in both Surface mode and Transparency mode. In the case of multichannel images, for example, structures inside a cell, such as FISH signals or nucleoli, can be displayed in Surface mode and the cytoplasm around these structures can be displayed transparently in another channel. This means that even highly complex spatial relationships can be shown convincingly.

Motif buttons
With the Motif buttons you can optimize image acquisition for particular requirements like speed or quality. All parameters, e.g. camera resolution or dynamic range in the Acquisition Mode or Channels tool, are set automatically. They basically influence camera, detector and lighting settings.

MTB (MicroToolBox)
Software module mediating between the ZEN software and the hardware (stand). The software runs as a service on the user PC.

Multi-GPU
For Deconvolution in ZEN, the processing of the image data can be distributed to multiple GPUs (Graphics Processing Units). This multi-GPU functionality is based on data splitting, i.e. input data is split into tiles based on the number of available GPUs. The splitting of data depends on the size of individual 3D subsets, the available resources of the GPUs, and the selected method and its parameters. The methods using multi-GPU are the Regularized Inverse Filter, Fast Iterative and Constrained Iterative, but only without depth variance.

NDD (Non descanned detector)
A detector with the shortest possible light path for the emitted light, thereby reducing signal loss. It also has no pinhole and fewer optical elements in the path.

Nearest Neighbor
The Nearest Neighbor method uses the simplest and fastest algorithm (Castleman, K.R., Digital Image Processing, Prentice-Hall, 1979). It is based on subtraction of the out-of-focus information in each plane of a stack, taking the neighboring sections above and below the corrected Z-plane into account. This method is applied sequentially to each plane of the entire 3D stack. It allows you to enhance contrast quickly, even if image stacks have not been put together optimally.

Nyquist Criterion
The Nyquist criterion states that a signal must be sampled with at least double precision in order to reliably acquire all the frequencies in the signal. In the case of images acquired with coarser resolution, undesired effects such as aliasing may otherwise result. For the deconvolution of microscope images, this means, in practical terms, that images should be acquired with a pixel resolution that is at least double the optical resolution, both in the lateral and the axial direction.
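As a worked example of the Nyquist criterion above, the following sketch computes the maximum lateral pixel size for Nyquist sampling. The Rayleigh estimate r = 0.61·λ/NA and the example values are illustrative assumptions, not ZEN parameters; the exact resolution formula depends on the imaging mode:

```python
def nyquist_pixel_size_nm(wavelength_nm, numerical_aperture):
    """Maximum lateral pixel size satisfying Nyquist: half the optical resolution,
    using the common Rayleigh estimate r = 0.61 * lambda / NA (an assumption here)."""
    resolution = 0.61 * wavelength_nm / numerical_aperture
    return resolution / 2

# Example: GFP emission (~520 nm) with a 1.4 NA oil objective
print(round(nyquist_pixel_size_nm(520, 1.4)))  # 113 (nm per pixel)
```

Any pixel size at or below this value samples the optical resolution at least twice, as the criterion requires.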
…zoomed-down versions of such large images, the computer has to do a lot of calculations for subsampling data. This is why it is helpful to create subsampled versions of the original images during acquisition, so that zoomed-down images can be displayed quickly. Usually several zoom levels are calculated automatically.

Raw Data Mode
The ApoTome combines the advantages of widefield imaging systems with the advantages of optical sectioning. Images acquired from the Acquisition tab always contain all images acquired from the grid. These grid images are also called phase or raw images. This principle offers several advantages: 1) all information acquired is kept and not discarded; 2) the acquisition itself is not slowed down by processing overhead; 3) you get access to various correction methods, giving you the flexibility to treat your sample in the right way after acquisition; 4) phase (= grid position) errors occurring during acquisition, such as those caused by vibrations of the microscope, can likely be corrected using the phase correction option without having to redo the acquisition; 5) you can achieve a marked improvement in resolution and contrast by using the specially adapted ApoTome deconvolution option bundled with all systems; 6) the raw mode facilitates easy analysis of images which show errors or artifacts in the sectioned image which would otherwise remain obscure.

Reference Z-Position
By default, the current Z-position at the time the experiment is started is set as the Reference Z-Position for acquisition. Z-stack experiments, for which the center of the defined Z-stack is set by default as the fixed Reference Z-Position, form an exception to this. Offsets for channels and Z-stacks shift acquisition in relation to the Reference Z-Position. If a focus strategy is used, this determines and updates the Reference Z-Position during the experiment.

Regularization
Working with real microscope images that are affected by noise leads to considerable difficulties with the practical application of deconvolution methods, which is why regularization (e.g. according to Tikhonov-Miller-Phillips) is essential. Regularization is a method that lessens the influence of noise by means of various penalty terms. Stronger regularization leads to weaker restoration, and weaker regularization to stronger restoration, although in this case noise is also intensified.

Regularized Inverse Filter
The inverse filter is a genuine 3D method and generally achieves better results than the Nearest Neighbor algorithm. It essentially involves dividing the Fourier transformation (FT) of the volume by the FT of the PSF, which can be performed very quickly. In real space this corresponds to deconvolution. In addition, a statistical method (General Cross Validation – GCV) is applied, which determines the noise component of the image and automatically sets the restoration strength to the optimum level in line with this. This process is also known as regularization. The method is very well suited to the processing of several image stacks in order to preselect images for the application of the iterative 'high-end' method. Z-stacks must, however, have been acquired at the correct (Nyquist) distance. The additional acquisition of Z-planes above and below the structure of interest is recommended.

Render Series
To display a 3D volume on the screen, each image must be recalculated. This takes time and, in the case of large images, cannot be done interactively. You can, however, have a
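The division of the image's Fourier transform by that of the PSF, damped by a penalty term, can be sketched in 2D as follows. This is an illustrative Tikhonov-style filter with a fixed regularization constant, not ZEN's implementation (which chooses the restoration strength automatically via GCV):

```python
import numpy as np

def regularized_inverse_filter(blurred, psf, reg=1e-6):
    """Tikhonov-regularized inverse filter: divide the Fourier transform of the
    image by that of the PSF, with a small penalty `reg` that stops noise
    amplification at frequencies where the PSF response is weak."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))          # centered PSF -> transfer function
    filt = np.conj(otf) / (np.abs(otf) ** 2 + reg)    # regularized 1/OTF
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * filt))

# Demo: blur a synthetic object with a Gaussian PSF, then restore it
n = 64
y, x = np.mgrid[:n, :n] - n // 2
psf = np.exp(-(x**2 + y**2) / 2.0)                    # sigma = 1 px, centered
psf /= psf.sum()
truth = np.zeros((n, n)); truth[20:28, 30:38] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(np.fft.ifftshift(psf))))

restored = regularized_inverse_filter(blurred, psf)
print(np.abs(restored - truth).sum() < np.abs(blurred - truth).sum())  # True: sharper result
```

Larger `reg` values correspond to the stronger regularization described above: less noise amplification, but also weaker restoration.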
samples, e.g. samples without colocalization as a negative control and samples with biologically relevant colocalization as a positive control. Thresholds determined in this way can, under certain circumstances, be transferred to the sample of interest.

TIE
TIE stands for Transport of Intensity Equation. It refers to a method where a quantitative phase image is produced by combining the information of two out-of-focus images with one in-focus image under brightfield illumination, yielding images with a phase contrast or, alternatively, a DIC/Nomarski-like contrast. This requires imaging under partially coherent illumination conditions by closing down the condenser aperture. Reference: Zou et al., 2017, Scientific Reports 7: 7654.

Tile region
In a tile experiment a tile region refers to a group of individual image fields (tiles) that belong together and are arranged in the form of a grid. With the help of tile regions it is possible to acquire areas whose dimensions exceed the size of an individual image field. Within an experiment a number of tile regions can be acquired at various positions on the sample. Each tile region is based on an X and Y coordinate of the stage and a Z coordinate of the focus drive. Tile regions are defined using the Tiles tool. After acquisition the individual tile regions are displayed as scenes.

Tone Mapping
Tone mapping refers to the compression of the dynamic range of high-contrast (HDR) images. The contrast range is reduced in order to display digital HDR images on output devices with a more limited dynamic range. Tone Mapping applies a dynamic range filter to the rendering result to improve the brightness, contrast and vibrancy for better visual quality and detail visibility.

Transparency mode
In the Transparency mode, the structures in the image are rendered in a similar fashion as in the Volume mode. The key difference is an applied edge enhancement filter that allows more focus on relevant structures within the data while simultaneously fading out homogeneous and less important areas.

Volume mode
In the Volume mode, the structures in the image are rendered as three-dimensional objects and illuminated by means of a virtual light source. Additionally, the transparency of the structures can be adjusted to allow deeper insights into the image. This allows a realistic representation of the structures in the image, imitating real-life observations. In contrast to the maximum projection mode, it also allows a quantitative display of the volume.

Widefield
Classical microscopes are frequently called widefield microscopes to distinguish them from microscope systems with optical sectioning capability, such as laser scanning microscopes. In contrast to such systems, widefield microscopes do not possess the ability to discriminate between image information in the axial (=Z) direction, leading to blurred images; they are therefore only poor 3D imaging systems per se. There are methods to add this missing axial sectioning ability to widefield microscopes, such as 3D deconvolution or structured illumination (ApoTome, Elyra-S).
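The relationship described in the TIE glossary entry above can be stated compactly. As a sketch using the standard literature form of the transport-of-intensity equation (generic notation; not necessarily the exact formulation used by the ZEN implementation): the in-focus intensity I and the axial intensity derivative, estimated from the two out-of-focus images, determine the phase φ:

```latex
% Transport of intensity equation (partially coherent brightfield),
% with k = 2\pi/\lambda and \nabla_{\!\perp} acting in the lateral (x,y) plane:
k \,\frac{\partial I(x,y,z)}{\partial z}
  = -\,\nabla_{\!\perp}\cdot\bigl(I(x,y,z)\,\nabla_{\!\perp}\phi(x,y)\bigr)

% The axial derivative is approximated from the two out-of-focus
% images I(z_0+\Delta z) and I(z_0-\Delta z) by a central difference:
\frac{\partial I}{\partial z}\bigg|_{z_0}
  \approx \frac{I(x,y,z_0+\Delta z)-I(x,y,z_0-\Delta z)}{2\,\Delta z}
```

Solving this elliptic equation for φ (for example by Fourier methods) yields the quantitative phase image; gradient-like renderings of φ give the DIC/Nomarski-style contrast.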
ZEISS
ZEISS is an internationally leading
technology enterprise operating in
the fields of optics and optoelectron-
ics. Further information about ZEISS
can be found at www.zeiss.com.
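The dynamic range compression described in the Tone Mapping entry above can be illustrated with a minimal sketch. This uses a generic global operator (the Reinhard curve L/(1+L)), not ZEN's actual dynamic range filter; the function names and the 8-bit quantization step are illustrative assumptions:

```python
def reinhard(luminance):
    """Compress HDR luminance values from [0, inf) into [0, 1).

    Generic global Reinhard operator L / (1 + L): it is monotone, so
    the ordering of pixel brightness is preserved, while extreme
    highlights are compressed far more strongly than mid-tones.
    """
    return [l / (1.0 + l) for l in luminance]


def to_8bit(compressed):
    """Quantize compressed values to the 0-255 range of a display."""
    return [min(255, int(c * 256.0)) for c in compressed]


# A small HDR luminance ramp: shadows, mid-tones, extreme highlight.
hdr = [0.0, 0.5, 1.0, 3.0, 1000.0]
ldr = reinhard(hdr)      # monotone values in [0, 1)
display = to_8bit(ldr)   # values an 8-bit output device can show
```

The key property for display purposes is that the whole infinite input range fits into the finite output range without clipping mid-tone contrast, which is exactly the trade-off tone mapping makes.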
Index

Numerics

2.5D View 999
2D View 990
  Context menu 990
  Graphical elements context menu 992
3D
  Analysis objects table 528
  Analysis tab 528
3D Image Analysis 525
  Create setting 525
  Export results 527
  Perform analysis 527
  View results 527
3D point alignment 645
3D View 529
  Animating the 3D Volume 529
  Key Layout / Controls for Flight Mode 531
  Left tool bar 531
  Right tool bar 532
  Tool bar (bottom) 532
3Dxl 529
3Dxl Plus module 545

A

Access token 206
ACP X Unscaled 440
ACP X Unscaled WCS 441
ACP Y Unscaled 441
ACP Y Unscaled WCS 441
Acquire
  Live panorama image automatically 260
Acquire a first camera image 33
Acquire ApoTome images 1054
Acquire Multi-channel Images 51
Acquire Tile Image 259
Acquire Tiles Images
  Adjusting Z Positions 341
  Assigning Categories to Tile Regions and Positions 355
  Re-Positioning of your sample carrier after incubation 357
  Tiles & Positions with Advanced Setup 344
  Using Sample Carriers 356
Acquiring Panorama Images with ZEN lite 255
Acquiring the Panorama Image 258
Acquisition
  FLIM 1237
  Optimize confocal acquisition settings 1221
  Preview image 1085
  Time Series images 390
  Z-Stack images 395
Acquisition Mode 874
Acquisition monitoring 1104
Acquisition Sequence 871
Acquisition tab 859
Acquisition tools
  Focus 938
  Stage 985
Activate
  Alignment process 635
Adapt
  Focus values 937
  Focus values with Definite Focus 938
  Focus values with Software Autofocus 937
Adaptive focus point distribution 1091
Add
  Dataset to ZEN Connect project 628
  Images to ZEN Connect project 626
  Measurement in ZEN Connect project 649
  Tag to ZEN Data Storage image 274
  Users to a group 45
  ZEN Data Storage data to collection 276
Add Annotations 35
Add Dye or Contrast Method 869
Add to collection dialog 281
Adjust
  Scan settings 1093
  Settings for coarse focus map 1090
  Settings for fine focus map 1092
Adjust Camera Orientation 344
Adjust live image settings
  Brightness 36
  Contrast 36
  Gamma 36
Adjust Z positions 341
Adjustment
  Airyscan detector 1294
  Collimator 1290
  Pinhole 1291
  Readback signals 1299
Advanced Processing 471
Advanced Scan Profile Editor 1120