
# 37: Visual chatting and Real-time acting Robot

Team Members: Haozhe Chi, Jiatong Li, Minghua Yang, Zonghai Jing
Documents: design_document1.pdf, design_document2.pdf, proposal1.pdf, proposal2.pdf, proposal3.pdf
Sponsor: Gaoang Wang
Group members:
Haozhe Chi, haozhe4
Minghua Yang, minghua3
Zonghai Jing, zonghai2
Jiatong Li, jl180
Problem:
With the rise of large language models (LLMs), large visual-language models (LVLMs) have achieved great success in recent AI development. However, configuring an LVLM system on a robot and making all of the hardware work well around that system remains a major challenge. We aim to design an LVLM-based robot that can react to multimodal inputs.
Solution overview:
We aim to deliver an LVLM system (software), a robot arm for actions such as grabbing objects (hardware), a wheeled base for moving through the environment (hardware), a camera for real-time visual input (hardware), a laser tracker for pointing out target objects (hardware), and audio equipment for audio input and output (hardware).
Solution components:
LVLM system:
We will deploy a BLIP-2-based AI model for visual-language processing. We will incorporate the strengths of several recent visual-language models, including LLaVA, VideoChat, and Video-LLaMA, to design a better real-time visual-language processing system. The system should support real-time visual chatting with less object hallucination.
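As a rough illustration of the BLIP-2 pipeline, the sketch below runs visual question answering on a single camera frame with the Hugging Face transformers library. The checkpoint name, the prompt format, and the frame.jpg path are our assumptions for the sketch, not final design decisions.

```python
# Minimal BLIP-2 VQA sketch; checkpoint, prompt, and image path are assumptions.
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b").to(device)

image = Image.open("frame.jpg")  # one frame from the robot's camera (assumed path)
prompt = "Question: what object is on the table? Answer:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(device)

out = model.generate(**inputs, max_new_tokens=30)
print(processor.batch_decode(out, skip_special_tokens=True)[0].strip())
```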
Robot arm and wheels:
We will use the ROS environment to control the robot's movements. We will apply to use the robot arms in the ZJUI ECE 470 labs and purchase wheels for locomotion, using either a four-wheel or a tracked design.
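A minimal sketch of how the wheel base could be driven from ROS 1 (rospy) by publishing velocity commands; the /cmd_vel topic name and the velocity values are assumptions that depend on the final base driver.

```python
#!/usr/bin/env python
# Minimal rospy sketch: drive the base forward by publishing Twist commands.
import rospy
from geometry_msgs.msg import Twist

def drive_forward():
    rospy.init_node("base_controller_demo")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)  # assumed topic name
    rate = rospy.Rate(10)  # 10 Hz command rate
    cmd = Twist()
    cmd.linear.x = 0.2   # m/s forward (assumed speed)
    cmd.angular.z = 0.0  # no rotation
    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    drive_forward()
```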
Camera:
We will configure cameras for real-time image input. 3D reconstruction may be needed, depending on our LVLM system design.
If multi-view inputs are needed, we will design a suitable multi-camera configuration.
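A minimal OpenCV sketch for grabbing real-time frames from a USB camera; the device index 0 is an assumption about how the camera enumerates on the robot's computer.

```python
# Capture frames from the first attached camera and display them.
import cv2

cap = cv2.VideoCapture(0)          # assumed device index
if not cap.isOpened():
    raise RuntimeError("camera not available")
while True:
    ok, frame = cap.read()         # BGR frame as a NumPy array
    if not ok:
        break
    # hand the frame to the LVLM pipeline here
    cv2.imshow("robot view", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```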
Audio processing:
We will use two audio processing systems: speech recognition for audio input and text-to-speech generation for audio output. We will add speaker components so the robot can talk.
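A minimal sketch of the two audio paths, using the SpeechRecognition package for speech-to-text and pyttsx3 for text-to-speech; both library choices are assumptions, and any ASR/TTS pair with Python bindings would serve.

```python
# Listen on the microphone, transcribe, and speak the result back.
import speech_recognition as sr
import pyttsx3

recognizer = sr.Recognizer()
tts = pyttsx3.init()

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    audio = recognizer.listen(source)

try:
    text = recognizer.recognize_google(audio)  # cloud ASR; offline engines also work
    print("heard:", text)
    tts.say("You said " + text)                # reply through the speaker
    tts.runAndWait()
except sr.UnknownValueError:
    print("could not understand audio")
```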
Criterion for success:
The robot must provide the following functions: voice recognition, laser tracking, real-time visual chatting, multimodal processing, identifying a specified object and moving to grab it, and a multi-view camera configuration. All of the hardware must cooperate well in the final demo: not only must every component function on its own, but the components must also combine into more advanced behaviors. For instance, the robot should be able to move toward a specified object while chatting with a human.

Filtered Back-Projection Optical Demonstration

Featured Project

Project Description

Computed tomography, often referred to as CT or CAT scanning, is a modern medical imaging technology. While many people have heard of this technology, few understand how it works; the concepts behind CT scans are abstract and often hard to visualize. Professor Carney has indicated that a small-scale device for demonstration purposes would help students gain a more concrete understanding of the technical components behind CT. Using light rather than x-rays, we will design and build a simplified CT device for use as an educational tool.

Design Methodology

We will build a device with three components: a light source, a screen, and a stand to hold the object. After an object is placed on the stand and the scan starts, the device will record three projections by rotating either the camera and screen or the object itself. Using the three projections together with an algorithm developed with a graduate student, the device will create a 3D reconstruction of the object.
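To make the reconstruction step concrete, here is a minimal 2D filtered back-projection sketch using scikit-image's radon/iradon as a stand-in for the algorithm we will develop; the phantom image and the three projection angles are assumptions for illustration. Three angles give only a very coarse reconstruction, which is exactly the trade-off the demonstration is meant to show.

```python
# 2D filtered back-projection from three projections (illustrative only).
import numpy as np
from skimage.transform import radon, iradon

phantom = np.zeros((128, 128))
phantom[40:80, 50:90] = 1.0                 # stand-in for the scanned object

angles = np.array([0.0, 60.0, 120.0])       # three projection angles (degrees)
sinogram = radon(phantom, theta=angles)     # what the photo-sensor screen records
recon = iradon(sinogram, theta=angles, filter_name="ramp")  # filtered back-projection

print("mean reconstruction error:", np.abs(recon - phantom).mean())
```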

Hardware

• Motors to rotate the camera and screen, or the object
• Grid of photo sensors built into the screen
• Light source
• Power source for each of these components
• Control system for timing the movement, light, and sensor readings (see the sequencing sketch below)
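As a rough illustration of that control system, the following hypothetical scan-sequencing loop shows the intended ordering of movement, illumination, and readout; rotate_to(), set_light(), and read_sensor_grid() are placeholder names for the eventual motor, light, and sensor drivers, not real APIs.

```python
# Hypothetical scan sequencing: rotate, settle, flash, read, repeat.
import time

ANGLES_DEG = [0, 60, 120]          # three projection angles

def scan(rotate_to, set_light, read_sensor_grid):
    projections = []
    for angle in ANGLES_DEG:
        rotate_to(angle)           # step the motor to the next angle
        time.sleep(0.5)            # let mechanical vibration settle
        set_light(True)            # illuminate the object
        projections.append(read_sensor_grid())  # sample the photo-sensor grid
        set_light(False)
    return projections             # feed these to the reconstruction algorithm
```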