[](https://pypi.python.org/pypi/imswitchclient)

## Try it on Google Colab

Open one of the example notebooks directly in Colab:

<a target="_blank" href="https://colab.research.google.com/drive/1W3Jcw4gFn0jtQXa3_2aCtJYJglMNGkXr?usp=sharing">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>

<a target="_blank" href="https://colab.research.google.com/github/openUC2/imswitchclient/blob/main/examples/StageCalibration.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
## Features

- **Remote Control**: Interface with ImSwitch through REST API endpoints.
- `setLaserValue(name, value)` - Set laser intensity.

### Recording Manager
- `snapNumpyToFastAPI(resizeFactor)` - Capture an image and return it as a NumPy array.
- `startRecording(save_format)` - Begin recording with an optional save format (`SaveFormat` enum).
- `stopRecording()` - Stop recording.
- `snapImageToPath(file_name)` - Snap an image and save it to the given path.
- `startVideoStream()` - Start the MJPEG video stream.
- `stopVideoStream()` - Stop the MJPEG video stream.
- `getVideoFrame()` - Get the current frame from the video stream.

#### SaveFormat Enum
The recording manager supports multiple save formats:
- `SaveFormat.TIFF` - TIFF format
- `SaveFormat.HDF5` - HDF5 format
- `SaveFormat.ZARR` - Zarr format
- `SaveFormat.MP4` - MP4 video format
- `SaveFormat.PNG` - PNG format
- `SaveFormat.JPG` - JPEG format
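A small helper can pick the matching `SaveFormat` member from a file extension. This is a sketch, not part of the client API: the extension-to-format mapping below is an assumption based on the member names listed above.

```python
import os

# Assumed mapping from file extensions to SaveFormat member names
# (not part of the imswitchclient API).
_EXT_TO_FORMAT = {
    ".tif": "TIFF", ".tiff": "TIFF",
    ".h5": "HDF5", ".hdf5": "HDF5",
    ".zarr": "ZARR",
    ".mp4": "MP4",
    ".png": "PNG",
    ".jpg": "JPG", ".jpeg": "JPG",
}

def format_name_for(path):
    """Return the SaveFormat member name matching the file extension."""
    ext = os.path.splitext(path)[1].lower()
    try:
        return _EXT_TO_FORMAT[ext]
    except KeyError:
        raise ValueError(f"no SaveFormat for extension {ext!r}")

# Usage (hypothetical):
#   from imswitchclient.recordingManager import SaveFormat
#   client.recordingManager.startRecording(getattr(SaveFormat, format_name_for("scan.tiff")))
```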
### Settings Manager
- `getDetectorNames()` - Get available detector names.
- `setDetectorBinning(detector_name, binning)` - Set detector binning.
- `setDetectorExposureTime(detector_name, exposure_time)` - Set detector exposure time.
- `setDetectorGain(detector_name, gain)` - Set detector gain.
- `setDetectorParameter(detector_name, parameter_name, value)` - Set a generic detector parameter.
- `setDetectorROI(detector_name, x, y, w, h)` - Set the detector region of interest.
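For `setDetectorROI`, a region centered on the sensor is a common choice. The helper below only computes the coordinates; the sensor dimensions in the usage note are illustrative assumptions.

```python
def centered_roi(sensor_w, sensor_h, roi_w, roi_h):
    """Return (x, y, w, h) for a ROI centered on a sensor of the given size."""
    if roi_w > sensor_w or roi_h > sensor_h:
        raise ValueError("ROI larger than sensor")
    x = (sensor_w - roi_w) // 2
    y = (sensor_h - roi_h) // 2
    return x, y, roi_w, roi_h

# Usage (sensor size is an assumption):
#   x, y, w, h = centered_roi(1936, 1216, 512, 512)
#   client.settingsManager.setDetectorROI(detector_name, x, y, w, h)
```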
|
125 | 161 | ### View Manager |
126 | | -- `setLiveViewActive(status)` - Enable live view. |
127 | | -- `setLiveViewCrosshairVisible(status)` - Show/hide crosshair. |
128 | | -- `setLiveViewGridVisible(status)` - Show/hide grid. |
| 162 | +- `setLiveViewActive(active)` - Enable/disable live view. |
| 163 | +- `setLiveViewCrosshairVisible(visible)` - Show/hide crosshair in live view. |
| 164 | +- `setLiveViewGridVisible(visible)` - Show/hide grid in live view. |
| 165 | + |
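Live view is often needed only for the duration of an interactive session. A context manager (a sketch, not part of the client; the `client.viewManager` attribute name is an assumption) makes sure it is switched off again:

```python
from contextlib import contextmanager

@contextmanager
def live_view(view_manager, crosshair=False):
    """Enable live view on entry and disable it again on exit."""
    view_manager.setLiveViewActive(True)
    view_manager.setLiveViewCrosshairVisible(crosshair)
    try:
        yield view_manager
    finally:
        view_manager.setLiveViewActive(False)

# Usage (manager attribute name is an assumption):
#   with live_view(client.viewManager, crosshair=True):
#       ...  # interact while live view is on
```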
### LED Matrix Manager
- `setAllLED(state, intensity)` - Set all LEDs to the given state and intensity.
- `setAllLEDOff()` - Turn off all LEDs.
- `setAllLEDOn()` - Turn on all LEDs.
- `setIntensity(intensity)` - Set LED intensity.
- `setLED(led_id, state)` - Set a specific LED by ID and state.
- `setSpecial(pattern, intensity, get_return)` - Set a special LED pattern.
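LED intensities are typically 8-bit values; the 0–255 range assumed below is an illustration, not something the API documents. A clamp helper avoids sending out-of-range values:

```python
def clamp_intensity(value, lo=0, hi=255):
    """Clamp an LED intensity to the assumed 0-255 integer range."""
    return max(lo, min(hi, int(value)))

# Usage:
#   client.ledMatrixManager.setIntensity(clamp_intensity(300))
```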
### Communication Manager
- `acquireImage()` - Acquire an image through the communication channel.
- `getImage()` - Get an image from the communication channel.
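Image acquisition over the network can fail transiently. A generic retry wrapper (a sketch, not provided by the client; the `client.communicationManager` attribute name is an assumption) is one way to harden it:

```python
import time

def acquire_with_retry(acquire, attempts=3, delay=0.5):
    """Call `acquire` until it succeeds or `attempts` are exhausted."""
    last_error = None
    for i in range(attempts):
        try:
            return acquire()
        except Exception as exc:  # narrow to the client's actual exception types
            last_error = exc
            if i < attempts - 1:
                time.sleep(delay)
    raise last_error

# Usage (manager attribute name is an assumption):
#   image = acquire_with_retry(client.communicationManager.acquireImage)
```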
### Experiment Controller
- `forceStopExperiment()` - Force-stop the current experiment.
- `getExperimentStatus()` - Get the current experiment status.
- `getHardwareParameters()` - Get hardware parameters.
- `pauseWorkflow()` - Pause the current workflow.
- `resumeExperiment()` - Resume a paused experiment.
- `stopExperiment()` - Stop the current experiment.
- `startWellplateExperiment(experiment_data)` - Start a wellplate experiment.
- `startWellplateExperimentWithScanCoordinates(...)` - Start a wellplate experiment with scan coordinates.
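`getExperimentStatus()` can be polled until an experiment finishes. The loop below is a sketch: the shape of the status payload and the "done" condition are assumptions, so supply your own predicate; the `client.experimentController` attribute name is also assumed.

```python
import time

def wait_for(get_status, is_done, timeout=600.0, poll=1.0):
    """Poll `get_status` until `is_done(status)` is true or `timeout` seconds pass."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if is_done(status):
            return status
        time.sleep(poll)
    raise TimeoutError("experiment did not finish in time")

# Usage (predicate and payload shape are assumptions):
#   wait_for(client.experimentController.getExperimentStatus,
#            lambda s: s.get("state") == "idle")
```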
### HistoScan Manager
- `stopHistoScan()` - Stop the current histo scan.
- `startStageScanningPositionlistbased(positionList, nTimes, tPeriod, illuSource)` - Start position-list-based stage scanning.
- `startStageMapping()` - Start stage mapping.
- `getStatusScanRunning()` - Get the scan-running status.
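`startStageScanningPositionlistbased` takes a position list; a serpentine (snake) grid is a usual choice because it minimizes stage travel between rows. The builder below is a sketch: the `(x, y)` tuple format, the argument values in the usage note, and the `client.histoScanManager` attribute name are all assumptions.

```python
def snake_positions(x0, y0, step, nx, ny):
    """Build a serpentine (x, y) grid: even rows left-to-right, odd rows reversed."""
    positions = []
    for j in range(ny):
        cols = range(nx) if j % 2 == 0 else range(nx - 1, -1, -1)
        for i in cols:
            positions.append((x0 + i * step, y0 + j * step))
    return positions

# Usage (position format and arguments are assumptions):
#   pos_list = snake_positions(0, 0, 100, 5, 5)
#   client.histoScanManager.startStageScanningPositionlistbased(pos_list, 1, 0, "LED")
```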
### Objective Controller
- `calibrateObjective(homeDirection, homePolarity)` - Calibrate the objective.
- `getCurrentObjective()` - Get the current objective.
- `getStatus()` - Get the objective status.
- `moveToObjective(slot)` - Move to a specific objective slot.
- `setPositions(x1, x2, z1, z2, isBlocking)` - Set objective positions.
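On a two-slot objective turret, switching means moving to the other slot. The helper below assumes slots are numbered 1 and 2 and that `getCurrentObjective()` returns the active slot number; both are assumptions, as is the `client.objectiveController` attribute name.

```python
def other_slot(current, slots=(1, 2)):
    """Return the slot to switch to on a two-slot turret (assumed 1/2 numbering)."""
    if current not in slots:
        raise ValueError(f"unknown slot {current!r}")
    return slots[1] if current == slots[0] else slots[0]

# Usage (return format of getCurrentObjective() is an assumption):
#   current = client.objectiveController.getCurrentObjective()
#   client.objectiveController.moveToObjective(other_slot(current))
```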

## Advanced Examples

### XY Scanning and Image Stitching
```python
import imswitchclient.ImSwitchClient as imc
from imswitchclient.recordingManager import SaveFormat
import numpy as np
import matplotlib.pyplot as plt

# Initialize client
client = imc.ImSwitchClient(host="192.168.1.100", port=8001)

# XY scanning parameters
start_x, start_y = 0, 0  # Starting position in µm
step_size = 100          # Step size in µm
nx, ny = 5, 5            # Number of steps in X and Y

# Get positioner name
positioner_names = client.positionersManager.getAllDeviceNames()
positioner_name = positioner_names[0]

# Setup recording
client.recordingManager.startRecording(SaveFormat.TIFF)

# Perform XY scan
images = []
positions = []

for i in range(nx):
    for j in range(ny):
        # Calculate target position
        target_x = start_x + i * step_size
        target_y = start_y + j * step_size

        # Move to position
        client.positionersManager.movePositioner(
            positioner_name, "X", target_x, is_absolute=True, is_blocking=True
        )
        client.positionersManager.movePositioner(
            positioner_name, "Y", target_y, is_absolute=True, is_blocking=True
        )

        # Capture image
        image = client.recordingManager.snapNumpyToFastAPI()
        images.append(image)
        positions.append((target_x, target_y))

# Stop recording
client.recordingManager.stopRecording()

# Simple stitching: place each tile into an (nx*h, ny*w) mosaic
tile_h, tile_w = images[0].shape[:2]
stitched_image = np.zeros((nx * tile_h, ny * tile_w), dtype=images[0].dtype)
for idx, img in enumerate(images):
    i, j = idx // ny, idx % ny
    stitched_image[i * tile_h:(i + 1) * tile_h, j * tile_w:(j + 1) * tile_w] = img

# Display result
plt.figure(figsize=(12, 8))
plt.imshow(stitched_image, cmap='gray')
plt.title('Stitched XY Scan')
plt.axis('off')
plt.show()
```

### Autofocus Example

```python
import imswitchclient.ImSwitchClient as imc
import numpy as np

# Initialize client
client = imc.ImSwitchClient(host="192.168.1.100", port=8001)

def calculate_focus_score(image):
    """Calculate a sharpness score from the image gradient variance."""
    gray = image if image.ndim == 2 else np.mean(image, axis=2)
    gy, gx = np.gradient(gray)
    return np.var(gx) + np.var(gy)

def autofocus_scan(client, positioner_name, z_min, z_max, z_steps=20):
    """Perform autofocus by scanning Z positions."""
    z_positions = np.linspace(z_min, z_max, z_steps)
    focus_scores = []

    for z_pos in z_positions:
        # Move to Z position
        client.positionersManager.movePositioner(
            positioner_name, "Z", z_pos, is_absolute=True, is_blocking=True
        )

        # Capture image and calculate focus score
        image = client.recordingManager.snapNumpyToFastAPI()
        score = calculate_focus_score(image)
        focus_scores.append(score)

        print(f"Z={z_pos:.2f} µm, Focus Score={score:.2f}")

    # Find best focus position
    best_idx = np.argmax(focus_scores)
    best_z = z_positions[best_idx]

    # Move to best focus
    client.positionersManager.movePositioner(
        positioner_name, "Z", best_z, is_absolute=True, is_blocking=True
    )

    print(f"Best focus at Z={best_z:.2f} µm")
    return best_z, focus_scores

# Usage example
positioner_names = client.positionersManager.getAllDeviceNames()
positioner_name = positioner_names[0]

# Perform autofocus
best_z, scores = autofocus_scan(client, positioner_name, z_min=0, z_max=100, z_steps=20)
```

### Time-lapse Recording with LED Control

```python
import imswitchclient.ImSwitchClient as imc
from imswitchclient.recordingManager import SaveFormat
import time

# Initialize client
client = imc.ImSwitchClient(host="192.168.1.100", port=8001)

# Setup LED illumination
client.ledMatrixManager.setAllLEDOff()
client.ledMatrixManager.setSpecial("brightfield", intensity=128)

# Configure detector settings
detector_names = client.settingsManager.getDetectorNames()
if detector_names:
    detector = detector_names[0]
    client.settingsManager.setDetectorExposureTime(detector, 50.0)
    client.settingsManager.setDetectorGain(detector, 1.0)

# Setup time-lapse parameters
interval_seconds = 60        # 1 minute between timepoints
total_duration_minutes = 60  # 1 hour total
num_timepoints = total_duration_minutes

# Start recording
client.recordingManager.startRecording(SaveFormat.TIFF)

for timepoint in range(num_timepoints):
    print(f"Capturing timepoint {timepoint + 1}/{num_timepoints}")

    # Capture image
    image_path = f"timelapse_t{timepoint:03d}.tiff"
    client.recordingManager.snapImageToPath(image_path)

    # Wait for next timepoint (except after the last one)
    if timepoint < num_timepoints - 1:
        time.sleep(interval_seconds)

# Stop recording and turn off LEDs
client.recordingManager.stopRecording()
client.ledMatrixManager.setAllLEDOff()
print("Time-lapse recording completed!")
```

### Multi-Position Experiment

```python
import imswitchclient.ImSwitchClient as imc
from imswitchclient.recordingManager import SaveFormat

# Initialize client
client = imc.ImSwitchClient(host="192.168.1.100", port=8001)

# Define multiple positions of interest
positions = [
    {"name": "sample1", "x": 1000, "y": 2000, "z": 50},
    {"name": "sample2", "x": 3000, "y": 4000, "z": 52},
    {"name": "sample3", "x": 5000, "y": 1000, "z": 48},
]

# Get positioner
positioner_names = client.positionersManager.getAllDeviceNames()
positioner_name = positioner_names[0]

# Start recording session
client.recordingManager.startRecording(SaveFormat.HDF5)

for pos in positions:
    print(f"Moving to position: {pos['name']}")

    # Move to position
    client.positionersManager.movePositioner(
        positioner_name, "X", pos["x"], is_absolute=True, is_blocking=True
    )
    client.positionersManager.movePositioner(
        positioner_name, "Y", pos["y"], is_absolute=True, is_blocking=True
    )
    client.positionersManager.movePositioner(
        positioner_name, "Z", pos["z"], is_absolute=True, is_blocking=True
    )

    # Capture one image per illumination channel
    for channel in ["brightfield", "fluorescence"]:
        if channel == "brightfield":
            client.ledMatrixManager.setSpecial("brightfield", intensity=100)
        else:
            client.ledMatrixManager.setSpecial("fluorescence", intensity=200)

        # Capture image
        image_name = f"{pos['name']}_{channel}.tiff"
        client.recordingManager.snapImageToPath(image_name)

# Clean up
client.recordingManager.stopRecording()
client.ledMatrixManager.setAllLEDOff()
print("Multi-position experiment completed!")
```

## Contributing