Hello everyone, I have recently been working on an experimental project whose goal is intelligent object grasping through the collaboration of a 3D scanner and a robot. Specifically, I want to capture an object's three-dimensional data with the 3D scanner and then have the robot grasp the object based on that data. I would like to use my existing Range 2, since it can already handle the basic data collection, and I don't want to spend too much.
However, I am facing some technical challenges and would appreciate your suggestions:
Data processing: After obtaining the object's three-dimensional data, how should I process it to accurately identify the object's grasp points?
Path planning and control: How should the robot plan a path from the processed data and carry out the grasping operation? Are there any recommended control algorithms or open-source libraries?
Secondary development: Once the basic functions are working, are there any suggestions for improving grasping efficiency and intelligence, for example making the system faster and more accurate?
If you need details about my hardware, you can check: https://www.revopoint3d.com/pages/handh ... ner-range2
If you have relevant experience or information, could you please share it? Thank you very much! I look forward to your responses and suggestions.
How to achieve 3D vision intelligent grasping
Hello Emily,
Using ROS as the overall framework would likely help, together with a real-time processing pipeline so the scan data is handled quickly and efficiently. You would also need to feed the system enough data, improve the machine-learning model, and use a simulation environment to reach the accuracy you want.
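As a concrete starting point, here is a minimal ROS 1 (rospy) sketch that subscribes to a point-cloud topic and hands each frame to a processing step. The topic name "/camera/depth/points" and the find_grasp_candidates() stub are placeholders of mine, not anything published by the Range 2 driver, so substitute whatever your setup actually provides.

```python
# Minimal ROS 1 sketch: subscribe to a PointCloud2 topic and process each frame.
# Assumptions: the topic name "/camera/depth/points" and find_grasp_candidates()
# are placeholders for illustration only.

import numpy as np
import rospy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2


def find_grasp_candidates(points):
    """Stand-in for the real grasp-detection step (e.g. a grasp pose detector)."""
    rospy.loginfo("Received cloud with %d points" % len(points))


def cloud_callback(msg):
    # Convert the PointCloud2 message to an N x 3 NumPy array, dropping NaNs.
    points = np.array(list(pc2.read_points(msg,
                                           field_names=("x", "y", "z"),
                                           skip_nans=True)))
    find_grasp_candidates(points)


if __name__ == "__main__":
    rospy.init_node("grasp_cloud_listener")
    # queue_size=1 drops stale clouds so processing stays close to real time.
    rospy.Subscriber("/camera/depth/points", PointCloud2, cloud_callback,
                     queue_size=1)
    rospy.spin()
```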
Hello Emily,
There are different methods for detecting grasp points that can locate grasps irrespective of object identity. One approach involves using a sliding window to identify areas in an RGBD image or a height map where a grasp is likely to be successful. Another approach extrapolates local "grasp prototypes" based on grasp demonstrations provided by humans.
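To make the sliding-window idea concrete, here is a small sketch that slides a fixed-size window over a height map and scores each location with a crude heuristic (two raised edges around a lower centre, i.e. something a parallel gripper could close on). The window size, stride, and scoring rule are illustrative assumptions of mine, not the method from any particular paper.

```python
# Sketch: slide a window over a height map and rank locations by a simple
# "two opposing raised edges" heuristic as a stand-in for a learned detector.

import numpy as np


def score_window(patch):
    """Higher score when the left/right borders stand above the centre strip."""
    w = patch.shape[1]
    centre = patch[:, w // 3: 2 * w // 3].mean()
    edges = np.concatenate([patch[:, :2].ravel(), patch[:, -2:].ravel()]).mean()
    return edges - centre


def sliding_window_grasps(height_map, win=15, stride=5, top_k=5):
    """Return (row, col, score) for the top_k best-scoring window centres."""
    candidates = []
    h, w = height_map.shape
    for r in range(0, h - win, stride):
        for c in range(0, w - win, stride):
            s = score_window(height_map[r:r + win, c:c + win])
            candidates.append((r + win // 2, c + win // 2, s))
    candidates.sort(key=lambda t: t[2], reverse=True)
    return candidates[:top_k]


if __name__ == "__main__":
    # Synthetic height map with two raised ridges to demonstrate the idea.
    hm = np.zeros((100, 100))
    hm[40:60, 20:25] = 0.05
    hm[40:60, 35:40] = 0.05
    print(sliding_window_grasps(hm))
```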
Some suggestions:
1. Consider using heatmaps instead of real images (see the sketch after this list).
2. I believe an Intel i7 at 3.5 GHz (with four physical CPU cores) and 16 GB of system memory would be a good option for this purpose.
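If "heatmaps instead of real images" is read as working with (and inspecting) the depth/height channel rather than RGB, the short sketch below colour-maps a depth image with OpenCV; the depth.npy file name is a placeholder for however you export depth from your scanner.

```python
# Sketch: render a depth/height image as a heatmap for inspection.
# Assumption: "heatmaps" here is interpreted as a colour-mapped depth channel;
# "depth.npy" is a placeholder file name.

import cv2
import numpy as np

depth = np.load("depth.npy")              # H x W float depth map (placeholder)
depth = np.nan_to_num(depth, nan=0.0)     # scanners often emit NaNs for holes

# Scale to 8 bit and apply a colour map so near/far structure is easy to see.
depth_8u = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
heatmap = cv2.applyColorMap(depth_8u, cv2.COLORMAP_JET)

cv2.imwrite("depth_heatmap.png", heatmap)
```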
In "Using Geometry to Detect Grasp Poses in 3D Point Clouds":
No Classification: We assume that all hand hypotheses generated by the sampling algorithm are antipodal and pass all hand samples directly to the grasp selection mechanism without classification.
Antipodal: We classify hand hypotheses by evaluating the conditions of Definition 3 directly for each hand and pass the results to grasp selection.
SVM: We classify hand hypotheses using the SVM and pass the results to grasp selection.
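These three variants are different classifiers plugged into the same sample-classify-select pipeline. The sketch below shows that structure schematically; it is not the authors' code, and the hypothesis sampler, antipodal test, and SVM are stand-in stubs (in practice you would use the authors' released implementation or your own trained classifier).

```python
# Schematic of the sample -> classify -> select structure behind the three
# variants ("No Classification", "Antipodal", "SVM"). All functions are stubs.

import random


def sample_hand_hypotheses(cloud, n=100):
    """Stub sampler: real hypotheses come from local surface geometry."""
    return [{"pose": (random.random(), random.random(), random.random())}
            for _ in range(n)]


def is_antipodal(hypothesis):
    """Stub for the geometric antipodal condition (Definition 3 in the paper)."""
    return random.random() > 0.5


def svm_predict(hypothesis):
    """Stub for a trained SVM over hand features."""
    return random.random() > 0.7


def detect_grasps(cloud, mode="svm"):
    hypotheses = sample_hand_hypotheses(cloud)
    if mode == "none":           # No Classification: keep every hypothesis
        keep = hypotheses
    elif mode == "antipodal":    # Antipodal: keep geometrically valid hands only
        keep = [h for h in hypotheses if is_antipodal(h)]
    elif mode == "svm":          # SVM: keep hands the classifier accepts
        keep = [h for h in hypotheses if svm_predict(h)]
    else:
        raise ValueError("mode must be 'none', 'antipodal', or 'svm'")
    # Grasp selection would rank the surviving hands; here we just return them.
    return keep


if __name__ == "__main__":
    print(len(detect_grasps(cloud=None, mode="antipodal")), "hands kept")
```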
I hope you find this information useful.