
SOPi
Here, you will learn how to acquire images on our Scanning Oblique Plane Illumination (SOPi) light-sheet microscope.
Introduction to SOPi
SOPi is our scanned oblique-plane illumination light-sheet microscope. It is capable of gentle, rapid volumetric imaging of a variety of specimens: cells, tissue slices, and thicker specimens such as syncytial slime molds, organoids, or pollen grains. To achieve this, SOPi utilizes a thoughtful combination of remote refocusing optics and oblique sample illumination from a single objective.
​
Before learning how to operate the microscope, a new user should understand the major consideration when using a microscope with a tilted (oblique) frame of reference: on SOPi, you see tilted slices of the sample in real time as you scan. In contrast to a traditional microscope, which displays "top-down" views of the specimen as you scan through it, images on SOPi are tilted slices that cut obliquely through the sample.

There are 4 consequences of using such an oblique imaging scheme:
- In the Live view, you will see a thinner, sheared cross-section of the specimen rather than the intuitive top-down or bird's eye view.
- While moving the sample laterally (along the specimen plane) with the joystick, the field of view will not look flat; objects will instead appear to move in and out of view as they cross the tilted viewing plane.
- When acquiring a volumetric image (i.e., the blue-green-red sequence shown at right), objects will appear to move along the camera chip.
- Volumetric images must be computationally "fixed" after imaging to return to the natural x,y,z-coordinate system.
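The computational fix is a deskew (shear correction). As a rough illustration only (this is not the BioMIID processing pipeline, and the tilt angle, scan step, and pixel size below are placeholder values), a sheared stack can be straightened by shifting each frame laterally in proportion to its position in the scan:

```python
import numpy as np

def deskew(stack, angle_deg=45.0, scan_step_um=0.5, px_um=0.26):
    """Shear-correct an oblique-plane stack of shape (frames, y, x).

    Each successive frame is shifted along x by the distance the sheet
    travels between frames, projected onto the camera axis.
    Integer-pixel shifts only; real pipelines interpolate sub-pixel.
    """
    shift_px = scan_step_um * np.cos(np.deg2rad(angle_deg)) / px_um
    n, ny, nx = stack.shape
    pad = int(np.ceil(shift_px * (n - 1)))
    out = np.zeros((n, ny, nx + pad), dtype=stack.dtype)
    for i, frame in enumerate(stack):
        s = int(round(i * shift_px))
        out[i, :, s:s + nx] = frame
    return out
```

Real deskewing also interpolates sub-pixel shifts and rescales the axes; the BioMIID team handles this during post-processing (see Section 5.3).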

1 | Microscope set-up
Turning on the hardware
Filling the MO2-MO3 chamber
Setting up the software
​
1.1. Turning on the hardware
Shown here is a map of the hardware components accessible to the SOPi user:

​​​​1.1.1. Turn on the Hamamatsu camera, ASI stage controller, ThorLabs galvo control box, and, if needed, Tokai Hit stagetop incubator, as shown:

1.1.2. Turn on the Coherent OBIS laser control box, wait a few minutes, then turn the key clockwise to the "on" position. Lights should indicate correct initialization.​​​

​1.2. ​Filling the MO2-MO3 chamber
SOPi makes use of remote refocusing to relay fluorescent light from the specimen through three objectives before reaching the main camera. Because SOPi uses water-dipping lenses, we need to first ensure that MO2-MO3 are immersed in water and that no air bubbles are present between them.
Note: Normally, this step is handled by the BioMIID staff before a user begins a regular experiment. However, users running long-term experiments may need to refill the chamber themselves periodically (e.g., every 6-12 hours).
​
1.2.1. A case of gel-loading (thin) tips and a tube of water are made available on the optical table. Take a P200 pipet from the Tissue Culture room and insert a gel-loading tip.

1.2.2. If the protective covering over MO2-MO3 is in place, carefully slide it out to expose the objectives and their water chamber (a gray gasket with an engraved "OK").
1.2.3. Fill the tip with water gradually, ensuring that there are no air bubbles visible in the tip. Eject and reload water if necessary to remove bubbles.​​


​1.2.4. Without dislodging the gasket, insert the tip into its top hole and inject water without introducing any air bubbles. Retracting the pipet before it fully empties out is helpful.
1.2.5. Repeat until water starts to come out of the hole; the gasket holds around 1 mL of water in total, so expect 5-6 rounds of injection.
1.2.6. Check for air bubbles (which would have risen to the top) by withdrawing ~100 uL of water; no air should enter the tip. Return the water and remove the pipet.
1.2.7. Gently lay the protective cover back onto MO2-MO3.
​
1.3. Setting up the software
Users need to initialize three apps to fully control SOPi: Nikon Elements AR (the main control software), ThorCam (for the focusing camera), and Coherent Connection (laser control).​
1.3.1. Click on the icons for all 3 apps in the bottom taskbar of the SOPi workstation.
1.3.2. ThorCam and Coherent Connection should open readily, while Nikon Elements will prompt you to confirm the correct main camera for image acquisition, a Hamamatsu Fusion. Click OK to proceed.

​1.3.3. You will enter the main graphical interface of Elements, which should look like this:

2 | Sample mounting & finding focus
Dual-camera set-up for focusing and acquiring
Mounting the specimen
Focusing on the specimen
​
2.1. Dual-camera set-up for focusing and acquiring
SOPi has 2 cameras: a ThorLabs camera for focusing the specimen using transmitted light (LED) and a Hamamatsu camera for acquiring images using the light-sheet (laser). We use a sliding prism to guide the light into either camera for its dedicated purpose. When the prism is not interrupting the image path, light from your specimen reaches the main Hamamatsu camera (State 1); when the prism slides into the path, it deflects the light into the ThorLabs focusing camera (State 2).

​​​2.1.1. You can toggle the position of the prism using the "Slider position" button in the top taskbar of Nikon Elements.

​2.2. ​Mounting the specimen
We offer a number of stages for mounting samples. Some of these have clamps to prevent the sample from moving around during acquisitions. Of special interest are specimen holders with humidifying chambers and CO2 lines (right):

Note: The BioMIID team welcomes strange and unorthodox sample types, and can work with you to develop custom mounting mechanisms for your specimens. We have 3-D printers, tools, hydrogels, silicone, polymers, curing glue, and many other materials at our disposal for such purposes. Contact us to discuss strategies.​
​
2.2.1. Immobilize your specimen onto the stage using any of the available standard or custom mounts. (Shown is a simple slide, which does not really need any special mounting strategy.)
Note: If using the incubator stage, you will need to fill the outer chamber with water (where the two air inlet tubes point). Then, use the magnetic gasket provided to prevent drying of the specimen.

2.2.2. On Elements, toggle our focusing LED light source using the "DIA" shutter button on the top taskbar, beside the Slider position button.

​2.2.3. Carefully raise the sample toward the objective using the "Up" button on the stage controller, beside the joystick. Don't crash the objective!

​​​​​​2.2.4. If the sample is not already submerged in water or media, take the plastic transfer pipet, fill it with water, and carefully inject water to bridge the sample surface and MO1.

​​​2.2.5. If the water or liquid media does not bridge the specimen and MO1, use the z-control knob on the stage controller to gently raise the specimen until MO1 touches the water. It will "pop in" when the distance is right.​
Note: The z-control knob has a "fast" and a "slow" state, controlled by the switch below it.

2.2.6. Use the same z-control knob to move the sample away from MO1 while keeping the water immersion intact. (We do not know how close the specimen is to MO1's focus at its current height, so we recommend starting from a distant position and bringing the specimen gradually into focus, rather than guessing which direction leads to the focus.)
​
2.3. Focusing on the specimen
Now that the specimen is properly mounted and MO1 is immersed in water/media, we can bring the sample to MO1's focus for imaging. We will first use the focusing camera to get as close to the right position as possible.​​
2.3.1. On Elements, turn on both the DIA shutter and Slider position buttons. This will send the image into the focusing camera (State 2):
​​2.3.2. ​On ThorCam, open the focusing camera:​​

2.3.3. This will open the image viewer. Click on the Live button to start an active, bright-field view of the specimen.

​2.3.4. The specimen will likely appear dark and under-exposed at first. Open the Display Settings and click on the Auto-Scale button to adjust the image brightness.
2.3.5. If the image is still dark after auto-scaling, adjust the camera's exposure by clicking the Settings button (6th button from the left, a diskette with a gear) and manually inputting an exposure level: lower it if the image is too bright, and increase it if the image is too dim.

2.3.6. Find the focal plane by bringing the specimen closer to MO1 (clockwise on the stage controller z-knob). Toggle the speed switch to slow down as the specimen moves ever closer. The image should appear very crisp and sharp when in focus.

2.3.7. Confirm focus by bringing the specimen even closer past the putative focal plane, which should cause it to appear slightly blurry. Then, bring it back to the crispest possible point. Iterate as necessary until satisfied with the view.
​2.3.8. Search for a region to image using the x,y-joystick on the stage controller. The joystick also has a button to toggle between fast/slow scrolling. For initial viewing and set-up, we typically like to focus on a region with lots of variation in structure and contrast, such as this view:

​
3 | Visualizing the specimen in real time
Focusing an image onto the main camera
Scanning the volume to center an object of interest
Capturing specimen thickness by adjusting the camera ROI
​
3.1. ​​​​Focusing an image onto the main camera
A clear image on the focusing camera indicates that we are also very near the focus with respect to the main camera (within 20 microns or so). The next step is to optimize the image formed on the main camera using Nikon Elements.
​3.1.1. On Coherent Connection, initialize the relevant lasers by clicking their respective Start buttons.

3.1.2. Temporarily reduce the laser power by clicking the drop-down arrow to the left of the desired laser line and typing 1 or 2 % into Operating Power.

3.1.3. Back in Elements, switch the image path to State 1 (image to main camera) by turning off the Slider position button.
3.1.4. Turn the bright-field illumination (DIA shutter) off.


3.1.5. Click on the Live button, which will automatically turn on the "Shutter (OBIS)" laser button.​
3.1.6. You will likely see the screen populate with a canvas but no clear image yet. Just like on the focusing camera, auto-scale the contrast of the live image by clicking Elements's auto-scale button (a histogram flanked with lines):

3.1.7. Structures from the sample will most likely be visible now, albeit neither focused nor centered on the camera's field of view.

3.1.8. Bring the sample into true focus by rolling the z-knob (on Slow mode) until the image is as thin and crisp as you can make it:​

Note: As the diagram in the Introduction shows, only a thin, tilted cross-section of the sample (in this case a slide-mounted specimen) will be illuminated and imaged by MO1 at the focus. Do not expect the usual bird's eye view or a large field of view along your sample's x,y-plane.
3.1.9. In the ideal case, the focused image of the specimen will lie at the center of the camera's field of view. To check, overlay a crosshair on the image by toggling the Graticule button on the right of the Elements toolbar.

3.2. Scanning the volume to center an object of interest
For flat, thinly cut specimens such as the mounted slide above, we constantly see a thin fluorescent slice that morphs and shifts as we translate the slide using the x,y-joystick of the stage controller. However, when our specimen comprises discrete objects with some thickness (e.g., cells or larger particles), these objects will appear to move in and out of the field of view as we scan, because the light sheet is tilted relative to the sample plane:


Therefore, to center the object in the field of view, a combination of x,y-scanning and z-adjustment is needed.​
3.2.1. Alternately move the sample in z and x until it is centered on the graticule. Do this by rolling the z-piezo wheel and the x,y-joystick (in the x-direction) in alternating short spurts while both are on their Slow settings.
3.3. Capturing specimen thickness by adjusting the camera ROI
​​​While you can use the "Full Sensor" of the camera and skip the subsequent steps, doing so will unnecessarily slow down the acquisition, bloat the image file size, and send higher doses of light to your specimen. Therefore, we recommend that users constrain the region of the camera receiving images to span only the appropriate thickness of their specimen's features.
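Some back-of-the-envelope arithmetic shows why cropping matters. The numbers below are illustrative only (a 2304 x 2304 sensor at 16 bits per pixel and 500 frames per volume; check your actual sensor format and frame counts):

```python
# Uncompressed volume size = pixels per frame x 2 bytes (16-bit) x frames.
# Sensor size, ROI height, and frame count here are illustrative only.
full_px = 2304 * 2304        # assumed full sensor
roi_px = 2304 * 512          # hypothetical 512-row ROI
frames = 500                 # hypothetical frames per volume

bytes_full = full_px * 2 * frames
bytes_roi = roi_px * 2 * frames
print(f"full sensor: {bytes_full / 1e9:.1f} GB per volume")   # ~5.3 GB
print(f"512-row ROI: {bytes_roi / 1e9:.1f} GB per volume")    # ~1.2 GB
```

Cropping rows cuts file size (and acquisition time) in direct proportion to the ROI height.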
3.3.1. Find the Optical Configuration (OC) panel on the Elements window (usually on the bottom-right side). Buttons named "Full Sensor" (entire camera chip) and "Custom" (user-defined region on the camera chip) are available.
3.3.2. Click the drop-down arrow beside Custom ROI (region-of-interest) and select "Define ROI". This will open the Live view with an adjustable bounding box.​​


3.3.3. Drag the bounding box from the top and bottom to shrink it while encapsulating the sample's thickness. Leave some margins on both the top and bottom. Click OK when done.

Note: This will cause a couple of important changes. First, the Live view will now reflect the chosen sample thickness. Second, the minimum exposure time threshold (little tick on the x-axis) shifts leftward, as imaging a smaller region of the camera can be done with shorter camera exposures. ​
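The leftward shift of the minimum exposure follows from row-by-row (rolling-shutter) readout: the shortest usable exposure scales with the number of rows being read. A sketch with a purely illustrative line-readout time (not the Fusion's actual specification; consult the camera manual):

```python
# Shortest exposure ~ (rows in ROI) x (time to read one row).
# LINE_TIME_US is an ILLUSTRATIVE value, not a real camera spec.
LINE_TIME_US = 5.0

def min_exposure_ms(roi_rows, line_time_us=LINE_TIME_US):
    """Approximate shortest exposure (ms) for an ROI of the given height."""
    return roi_rows * line_time_us / 1000.0

print(min_exposure_ms(2304))  # full sensor
print(min_exposure_ms(512))   # cropped ROI allows ~4.5x shorter exposures
```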
3.3.4. We recommend moving the specimen around with the x,y-joystick to make sure that your chosen ROI does not exclude bits of the specimen further away from the current position.
​3.3.5. Turn off the Live view in between inspections to minimize photodamage to the specimen.
4 | Acquiring image data
Writing data to your Nonsync folder
Acquiring test volumes and optimizing acquisition parameters
Imaging multiple stage positions and time points​ using ND JOBS
​
4.1. Writing data to your Nonsync folder
Now that the sample is in focus and centered on the main camera, we will optimize signal collection, visualize some test volumes, and acquire genuine experimental image data. As discussed in the Getting Started guide, we assign users space on our BioMIID Central data storage server. The user's Nonsync folder, which is not constantly monitored for cloud-based backup, is the ideal place to write raw data from the microscopes.
4.1.1. Map a network drive to BioMIID Central​ (Getting Started Step 2.6).
4.1.2. Access your assigned Nonsync folder in :\BioMIID_Central\BioMIID_Nonsync\BioMIID_Users_Nonsync
​​​
4.2. Acquiring test volumes and optimizing acquisition parameters
Now that we can visualize oblique slices through the specimen, the next step is to generate some test volumes that minimize sample light exposure while maximizing visualization of the specimen.
4.2.1. Check that you are collecting sufficient signal from the specimen by inspecting the pixel intensity distributions on the LUT/histogram panel:

​In the above example, the specimen's signal spans only ~600 of the available 65,536 pixel intensity values on the camera (~1 %). Thus, we can afford to increase exposure time and laser power by quite a lot, if necessary.​
​​​​4.2.2. On the "OC Panel" tab, click on the channel/color of interest (e.g., GFP). Adjust the camera exposure time via the Elements OC Panel (below) and laser power via Coherent Connection (as in 3.1.2).

4.2.3. A "!" warning sign will appear beside the channel of interest to indicate that its settings have changed; click the right arrow to save the settings you have just optimized. Do the same for any other channel being imaged.
Note: We recommend maximizing coverage of the available pixel intensity range with signal (without saturating), but oftentimes one will have to prioritize minimizing the light dose to the specimen, especially in live/non-fixed experiments or when imaging specimens prone to photobleaching.
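The coverage check from 4.2.1 can also be done numerically on an exported frame. A small sketch (outside of Elements; assumes a 16-bit image loaded as a NumPy array):

```python
import numpy as np

def range_usage(img, bit_depth=16):
    """Fraction of the camera's intensity range spanned by the image."""
    span = int(img.max()) - int(img.min())
    return span / (2 ** bit_depth - 1)

# A frame whose counts run from ~100 to ~700 spans ~600 values,
# i.e. ~1 % of the 65,536-level range, as in the example above.
frame = np.random.randint(100, 701, size=(256, 256), dtype=np.uint16)
print(f"{range_usage(frame):.1%} of the dynamic range used")
```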

​​​​​​​​4.2.4. Next, we will take some test volumes using the ND Acquisition panel. If not already loaded onto the Elements display, access this panel by clicking on the search bar (top right) and typing "ND Acquisition":

​​​​​​​​​​​The tabs on the left side can be used to control timelapse, multistage, multi-channel (color/wavelength), z-stack, large field of view, and custom hardware triggering, respectively.
4.2.5. Check the z-stack tab to expose sheet sweep controls.
4.2.6. Uncheck the x,y-multistage tab. If checked, a previously saved position may cause the specimen to crash into the objective.
4.2.7. On the z-stack control panel, first make sure of the following:
- The z-stack mode is set to "Relative", which means the sheet sweeps a specified distance in either direction from a set "Home" position
- Z Device is set to "Triggered NIDAQ Piezo Z"
- The "Close Active Shutter" box is unchecked

​​​​​4.2.8. The z-stack control panel allows users to control two parameters concerning the volumetric scan: step size and sweep range.

4.2.9. Select the appropriate step size for your acquisition. ​Smaller step sizes lead to better-resolved 3-D images, up to a point. Step sizes of 0.5-1 um are usually sufficient for cell- and tissue-scale specimens.
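Together with the sweep range (chosen in the next step), the step size determines how many frames each volume contains, which in turn drives acquisition time and file size. A quick check with example values (not recommendations):

```python
# Frames per volume = sweep range / step size, endpoints inclusive.
# Example values only; pick your own in steps 4.2.9-4.2.10.
sweep_range_um = 50.0
step_um = 0.5
n_frames = int(round(sweep_range_um / step_um)) + 1
print(f"{n_frames} frames per volume")  # 101
```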

​​4.2.10. Select the appropriate light-sheet sweep range for your acquisition. Go Live, then first "Home" the light sheet to its starting position by clicking on the "Piezo" home button to the right of Z Device on ND Acquisition.

​4.2.11. On Live view, hovering over the live image and scrolling up and down with the mouse wheel will move the light sheet left and right relative to the "Home" position. This can be used to determine the range that covers a discrete object (or objects) you want to capture.

4.2.12. "Home" the piezo position once more to re-center the light sheet.
4.2.13. Once you have selected some scan settings to test, locate the right side of the ND Acquisition panel, which contains the data-saving controls.
4.2.14. Check the "Save to File" box if you would like to input a directory to save in, and locate your Nonsync folder.
4.2.15. Click the "Run now" button (bottom right) to acquire the test volume.
4.2.16. The volume will be readily viewable on the Elements desktop:

4.2.17. To render the z-stack as a volume, click on the Volume view button (a sphere with x,y,z-axes) and select "Yes" or "No" to the option of generating a full-resolution file. (No is faster and appropriate for quick visualization.)

4.2.18. Use the 3-D view to decide whether you are satisfied with the resolution, field of view, and contrast of the image(s) taken:

Note: You may prefer MaxIP or Alpha renderings for your specific sample.
​
4.3. Imaging multiple stage positions and time points​ using ND JOBS
Once you are happy with your acquisition settings based on test images, you are ready to acquire a full dataset. You can use either the built-in ND Acquisition tool or the BioMIID team's streamlined ND JOBS script.
4.3.1. Access the JOBS Explorer on Elements:​

4.3.2. Open "SOPi multistage timelapse".

4.3.3. Click the play button on the bottom-right corner of the resulting window:

This will open a Job Wizard with 4 steps to set up a multi-stage timelapse.

4.3.4. Go through the 4 steps:
1) Pick the location into which image data will be written. We ask that users input their BioMIID_Central Nonsync folders (see set-up for more details).

If the BioMIID_Central drive is not yet mapped, follow the mapping protocol from the set-up page to access the contents of this folder. Choose your assigned Nonsync folder.
2) Click "Time Lapse" on the left panel to proceed to the next step. Here, input how many rounds of imaging ("Loops") and how often you want images taken.

3) Click "ND Acquisition" on the left panel to proceed to the next step.
3A) Here, copy the settings optimized previously from your test acquisitions:​
- Step size
- Number of steps
- Range
3B) If you are taking multi-color images, click the "Lambda" button (under "Timing" and beside "Z"). Add the colors you would like to image by clicking on the checkbox underneath "Opt. Conf.", and select the Optical Configurations (channels & exposure times) previously saved on the OC Panel (step 4.2.2).

4) Click "Alternative Filename" on the left panel to proceed to the next step. Here, provide a name for the experiment and arrange the variables that will be included in the filename when it is written. This determines how the filenames will be parsed during batch analysis of the images.
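For example, batch analysis typically recovers those variables by parsing each filename. A sketch assuming a hypothetical naming pattern (yours will depend on how you arrange the variables in the wizard):

```python
import re

# Hypothetical pattern: <experiment>_point<P>_time<T>.tif
# Adjust the regex to match the variables you arranged in the wizard.
PATTERN = re.compile(r"(?P<exp>.+)_point(?P<point>\d+)_time(?P<t>\d+)\.tif$")

def parse_name(fname):
    """Extract (experiment, stage point, time index) from a filename."""
    m = PATTERN.match(fname)
    if m is None:
        raise ValueError(f"unexpected filename: {fname}")
    return m.group("exp"), int(m.group("point")), int(m.group("t"))

print(parse_name("slimemold_point03_time012.tif"))  # ('slimemold', 3, 12)
```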

4.3.5. Click on the Run button in the lower left corner to proceed. This will open a new window containing the experiment progress above, and positions for imaging below.

4.3.6. Select stage positions of interest to image. Click on the Live button to view the current position, then scroll through the specimen using the x,y-joystick. Mark positions by clicking the checkboxes under "Point Name".

4.3.7. When all the stage positions have been assigned, first click "Optimize" below the position list, and then save the coordinates by clicking "Save".
4.3.8. Click "Continue" on the bottom right and the imaging will begin.

4.3.9. For long-term imaging experiments, replace the water in the MO2-MO3 chamber at the start and end of each workday.
​
​
5 | Microscope set-down
Dismount and clean
Turn off the hardware and software
Inform BioMIID Team for image processing
​
5.1. Dismount the sample and clean the microscope.
5.1.1. Hold the "Down" button on the ASI stage to quickly lower the specimen relative to the objective. Hold it all the way until the stage stops moving.
5.1.2. Remove the specimen and set it aside.
5.1.3. If the incubator stage was used, use the vacuum line provided below the optical table (right) to remove water from the humidifying chamber (left).

5.1.4. Use the lens paper provided to gently wipe the microscope objective (without touching it directly).
​​
5.2. Turn off the various hardware components and control software.
5.2.1. Close the ThorCam and Nikon Elements software.
5.2.2. In Coherent Connection, turn off the lasers before turning the key switch to "standby" mode.
5.2.3. Close the Coherent Connection software.
5.2.4. Turn off the following hardware components:
- Hamamatsu camera
- ThorLabs galvo control box
- Tokai Hit incubator
- ASI stage
- OBIS laser box
5.3. Inform the BioMIID team that your experiment has concluded so that we can provide image processing support (e.g., image deskewing/unshearing).
​​
​
