
Automated Robotic System for Flexible Liquid Handling in Laboratory Settings


Core Concepts
A flexible, modular, and cost-effective robotic system is introduced to automate small-scale laboratory experiments involving manipulation of liquid containers, using computer vision for liquid volume estimation and a simulation-driven pouring approach designed for containers with small openings.
Abstract
The paper presents a novel approach for automating small-scale laboratory experiments involving liquid handling. The key components are:

- Vision-based Liquid Volume Estimation: A two-step deep learning architecture is proposed, consisting of a Segmentation and Depth Estimation (SDE) network and a Liquid Volume Estimation (LVE) network. The SDE network predicts the segmentation and depth of the transparent container and the contained liquid from a single RGB image; the LVE network then estimates the actual liquid volume based on the SDE output. The authors create a new dataset, LabLiquidVolume, with 5,451 real-world images of transparent laboratory containers labeled with the liquid volume.

- Simulation-driven Pouring for Small Openings: A new pouring strategy is developed that confines the robot arm to rotate around a fixed liquid exit point, inspired by human pouring movements. A large pool of simulated pouring trajectories is generated, and the simulation that best approximates the real-world scenario is selected for execution on the physical robotic setup.

The proposed system is integrated with a UR5 robotic arm and evaluated on automated cell culture processes, including media changing and cell passaging. The results demonstrate the effectiveness of the vision-based volume estimation and the simulation-driven pouring approach, with low spilling rates even for containers with small openings.
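The two-step architecture described above can be sketched as a simple pipeline: an SDE stage mapping an RGB image to masks and depth, and an LVE stage regressing volume from that output. The sketch below uses placeholder functions with hypothetical shapes and a crude pixel-count heuristic; it illustrates only the data flow, not the authors' actual networks.

```python
import numpy as np

def sde_network(rgb_image):
    """Placeholder for the Segmentation and Depth Estimation (SDE) stage.

    Takes an HxWx3 RGB image and returns container/liquid segmentation
    masks plus a depth map, mirroring the paper's first stage.
    """
    h, w, _ = rgb_image.shape
    container_mask = np.zeros((h, w), dtype=bool)
    liquid_mask = np.zeros((h, w), dtype=bool)
    depth_map = np.zeros((h, w), dtype=np.float32)
    # A real network would predict these; here we fake a filled region.
    container_mask[h // 4:3 * h // 4, w // 4:3 * w // 4] = True
    liquid_mask[h // 2:3 * h // 4, w // 4:3 * w // 4] = True
    depth_map[container_mask] = 0.5  # metres, arbitrary
    return container_mask, liquid_mask, depth_map

def lve_network(container_mask, liquid_mask, depth_map):
    """Placeholder for the Liquid Volume Estimation (LVE) stage.

    A real model regresses volume from the SDE output; this stand-in
    uses a crude pixel-count heuristic just to show the interface.
    """
    liquid_pixels = liquid_mask.sum()
    mean_depth = depth_map[liquid_mask].mean() if liquid_pixels else 0.0
    return float(liquid_pixels * mean_depth * 1e-3)  # pseudo-mL

def estimate_volume(rgb_image):
    """Two-stage pipeline: RGB image -> (masks, depth) -> volume."""
    return lve_network(*sde_network(rgb_image))

image = np.random.rand(480, 640, 3).astype(np.float32)
volume_ml = estimate_volume(image)
print(f"Estimated volume: {volume_ml:.1f} (pseudo-mL)")
```

The key design point is that the volume regressor never sees the raw image, only the intermediate segmentation and depth representation produced by the first stage.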
Stats
- The LabLiquidVolume dataset contains 5,451 real-world images of transparent laboratory containers with manually labeled liquid volumes.
- The vision-based liquid volume estimation model achieves a root-mean-squared error (RMSE) of 17.83 mL and a mean absolute percentage error (MAPE) of 9.39% on the test set.
- The simulation-driven pouring approach results in an RMSE of 10.8 mL (MAPE = 21%) and a maximum difference of 49.3 mL when executed on the real-world robotic system using a cell culture flask.
- For the autonomous cell culture workflows, the system achieves a 100% completion rate for analyzing cell growth and changing media, and a 90% completion rate for passaging.
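For reference, the RMSE and MAPE metrics quoted above are standard regression errors and can be computed as follows. The volumes here are made-up illustrative values, not the paper's data.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-squared error, in the units of the measurements (mL)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

# Illustrative ground-truth vs. predicted volumes in mL.
true_ml = [100.0, 250.0, 50.0, 400.0]
pred_ml = [110.0, 240.0, 55.0, 380.0]
print(f"RMSE: {rmse(true_ml, pred_ml):.2f} mL")  # 12.50 mL
print(f"MAPE: {mape(true_ml, pred_ml):.2f} %")   # 7.25 %
```

Note that RMSE is scale-dependent (a fixed 10 mL error hurts more on small containers), while MAPE normalizes by the true volume, which is why the pouring results can show a lower RMSE but a higher MAPE than the vision model.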
Quotes
"Our work is fully reproducible: we share our code at at https://github.com/DaniSchober/LabLiquidVision and the newly introduced dataset LabLiquidVolume is available at https://data.dtu.dk/articles/dataset/LabLiquidVision/25103102." "Pouring a specific amount of liquid into a container with a small opening is an important and frequently reoccurring manipulation task for robots in laboratory environments."

Deeper Inquiries

How could the vision-based liquid volume estimation be further improved, for example, by incorporating additional sensor modalities or exploring more advanced deep learning architectures?

The vision-based liquid volume estimation can be enhanced by integrating additional sensor modalities to complement the visual data. For instance, incorporating depth sensors such as LiDAR or time-of-flight cameras can provide more accurate depth information, especially for transparent containers, where the liquid level is often challenging to detect visually. By fusing depth information with RGB images, the system can build a more complete picture of the container's geometry and the liquid's volume.

Moreover, exploring more advanced deep learning architectures, such as 3D convolutional neural networks (CNNs) or transformer-based models, could further improve performance. 3D CNNs capture spatial information more effectively, which is crucial for estimating volume in three-dimensional space. Transformer-based models, known for handling sequential data efficiently, could be adapted to process the sequential nature of pouring actions and liquid level changes over time. By leveraging these architectures, the system can potentially achieve higher accuracy and robustness in liquid volume estimation.
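Early fusion of an extra depth modality with RGB, as suggested above, can be as simple as stacking the depth map onto the image as a fourth input channel before feeding a network. A minimal sketch, where the shapes, normalization range, and function name are assumptions for illustration:

```python
import numpy as np

def fuse_rgbd(rgb, depth, max_depth_m=2.0):
    """Early sensor fusion: stack a depth map onto an RGB image as a 4th channel.

    rgb:   HxWx3 float array in [0, 1]
    depth: HxW float array in metres (e.g. from a time-of-flight camera)
    Returns an HxWx4 array that a downstream network could consume.
    """
    depth_norm = np.clip(depth / max_depth_m, 0.0, 1.0)  # normalise to [0, 1]
    return np.concatenate([rgb, depth_norm[..., None]], axis=-1)

rgb = np.random.rand(480, 640, 3)
depth = np.full((480, 640), 0.6)  # e.g. a container 0.6 m from the camera
rgbd = fuse_rgbd(rgb, depth)
print(rgbd.shape)  # (480, 640, 4)
```

Early fusion like this keeps the architecture unchanged apart from the first convolution's input channels; an alternative is late fusion, where separate RGB and depth branches are merged deeper in the network.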

What are the potential challenges and limitations in scaling up the proposed robotic system to handle a wider range of laboratory tasks and equipment beyond the cell culture use case?

Scaling up the proposed robotic system to accommodate a broader range of laboratory tasks and equipment beyond cell culture presents several challenges and limitations. One key challenge is the diversity of laboratory tasks, each requiring specific manipulation skills and equipment interactions. Adapting the system to handle various tasks would necessitate developing task-specific algorithms and hardware configurations, increasing the complexity of the system.

Another challenge is the variability in laboratory equipment and setups. Different laboratories may use unique containers, instruments, and protocols, making it difficult to create a one-size-fits-all robotic solution. Customizing the system for each laboratory's requirements would require significant time and resources.

Furthermore, safety considerations become more critical when scaling up the system. Working with hazardous materials, delicate samples, or sensitive equipment introduces additional risks that must be carefully managed to protect both the system and laboratory personnel.

Additionally, integrating the robotic system with existing laboratory infrastructure and workflows can be complex. Ensuring seamless communication and coordination between the robot and other laboratory devices, such as incubators, microscopes, and sensors, requires robust integration protocols and compatibility testing.

Given the importance of minimizing liquid spills in laboratory settings, how could the simulation-driven pouring approach be extended to handle more complex container shapes and pouring scenarios while maintaining high precision and reliability?

To extend the simulation-driven pouring approach to more complex container shapes and pouring scenarios while maintaining precision and reliability, several strategies can be implemented:

- Advanced Simulation Models: Develop more sophisticated simulation models that account for complex fluid dynamics, container geometries, and material properties. By simulating a wider range of pouring scenarios, including different container shapes and liquid viscosities, the system can learn to adapt to diverse pouring challenges.

- Machine Learning for Simulation Optimization: Utilize machine learning algorithms to optimize simulation parameters for different container shapes and pouring conditions. By training models on a diverse set of simulation data, the system can learn to predict optimal pouring strategies for various scenarios.

- Sensor Fusion: Integrate additional sensors, such as force/torque or pressure sensors, to provide real-time feedback during pouring actions. By combining simulation predictions with sensor data, the system can adjust pouring trajectories on the fly to account for unexpected variations in container shapes or liquid properties.

- Adaptive Control Strategies: Implement adaptive control that dynamically adjusts pouring movements based on feedback from sensors and simulation models. By continuously monitoring and optimizing pouring actions, the system can respond in real time to changes in the environment, ensuring accurate and spill-free pouring in complex scenarios.

By incorporating these techniques, the simulation-driven pouring approach can be extended to handle a wider range of challenges in laboratory settings, maintaining high precision and reliability while minimizing liquid spills.
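The paper's core idea of selecting, from a large pool of simulated pourings, the one that best approximates the real scenario can be sketched as a nearest-neighbour lookup over simulation parameters. Everything below (the parameter set, the distance weights, the trajectory encoding) is a hypothetical illustration, not the authors' implementation:

```python
import numpy as np

# Each simulated pouring is described by the parameters it was run with
# and the joint-angle trajectory it produced (hypothetical fields).
rng = np.random.default_rng(0)
pool = [
    {
        "start_volume_ml": float(rng.uniform(20, 400)),
        "target_volume_ml": float(rng.uniform(10, 200)),
        "opening_diameter_mm": float(rng.uniform(10, 40)),
        "trajectory": rng.uniform(0, np.pi / 2, size=50),
    }
    for _ in range(1000)
]

def select_trajectory(pool, start_ml, target_ml, opening_mm):
    """Pick the simulated pouring whose parameters best match the real scene.

    Uses a weighted Euclidean distance over the parameters; the weights
    are arbitrary illustrative choices (emphasising the target volume).
    """
    weights = np.array([1.0, 2.0, 1.0])
    query = np.array([start_ml, target_ml, opening_mm])

    def distance(sim):
        params = np.array([sim["start_volume_ml"],
                           sim["target_volume_ml"],
                           sim["opening_diameter_mm"]])
        return float(np.sqrt(np.sum(weights * (params - query) ** 2)))

    return min(pool, key=distance)

best = select_trajectory(pool, start_ml=150.0, target_ml=50.0, opening_mm=15.0)
print(f"Selected sim with target {best['target_volume_ml']:.1f} mL "
      f"and opening {best['opening_diameter_mm']:.1f} mm")
```

The adaptive-control extension discussed above would replace this one-shot, open-loop selection with a loop that re-queries or re-simulates whenever sensor feedback (e.g. force/torque readings) diverges from the chosen simulation's prediction.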