# Project 28: A Climbing Robot for Building 3D Printed Concrete Walls

- Team Members: Benhao Lu, Jianye Chen, Shenghua Ye, Zhenghao Zhang
- TA: Binbin Li
- Documents: design_document1.pdf, final_paper1.pdf, final_paper2.pdf, proposal1.pdf
## Members:

- Jianye Chen (jianyec2)
- Zhenghao Zhang (zz84)
- Shenghua Ye (sye14)
- Benhao Lu (benhaol2)

## Project title
A Climbing Robot for Building 3D Printed Concrete Walls

## PROBLEM:
Current 3D printing construction, while effective in reducing construction waste and improving efficiency, struggles to adapt to complex architectural forms and to construct tall buildings. Existing equipment is limited in spatial adaptability, especially when dealing with the irregularities and textures of 3D printed concrete structures. A versatile climbing and printing system for high-rise and geometrically complex construction is therefore a pressing need in the construction industry.

## SOLUTION OVERVIEW:
This project proposes an innovative climbing and self-supporting 3D printing system for construction. The system comprises a versatile mobile unit, including a climbing device for adapting to complex facades and a movable support system for irregular plans. The climbing device ensures stable ascent through power-driven surface adaptation and load-bearing anchors. The support system includes telescopic rails, pulleys, lifting columns, and a robotic arm for diverse construction needs. The construction system integrates material feeding, real-time printing feedback, and precise steel bar placement. The control system, based on GPS, facilitates targeted positioning, enabling intelligent construction of complex spatial structures. Overall, this solution aims to enhance 3D printing adaptability, revolutionizing construction methods for diverse architectural forms.

## SOLUTION COMPONENTS:
The proposed solution consists of the following components:

## MOBILE SYSTEM:
- Climbing and lifting device with power drive, surface climbing, and load-bearing anchor lock modules.
- Construction support device with telescopic rails, universal pulleys, rigid lifting columns, and a multifunctional construction robotic arm.

## CONSTRUCTION SYSTEM:
- Material feeding device for adjusting material flow.
- Printing device providing real-time feedback on additive construction accuracy.
- Reinforcement device for positioning and laying steel bars.
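
As one way to picture the feeding device's "adjusting material flow" working together with the printing device's accuracy feedback, here is a minimal proportional-control sketch; the sensor reading, gain, and target width are hypothetical placeholders, not the project's actual interfaces.

```python
# Illustrative only: proportional adjustment of extrusion flow so the
# printed bead width tracks a target. Sensor and pump interfaces are
# hypothetical placeholders, not the project's actual hardware API.

TARGET_WIDTH_MM = 40.0   # desired bead width (assumed value)
KP = 0.02                # proportional gain: flow fraction per mm of error

def adjust_flow(current_flow: float, measured_width_mm: float) -> float:
    """Return a new flow command, clamped to 10-100% of pump capacity."""
    error = TARGET_WIDTH_MM - measured_width_mm  # >0 means bead too narrow
    return max(0.1, min(1.0, current_flow + KP * error))
```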

## CONTROL SYSTEM:
GPS-based control system for precise positioning and printing control.
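
To make the positioning concrete, the sketch below converts GPS fixes into local east/north offsets with an equirectangular approximation; the function and sample coordinates are ours, purely for illustration. Note that standalone GPS is only accurate to a few meters, so centimeter-level printing control would in practice need RTK corrections or complementary local sensing.

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def gps_to_local_m(lat: float, lon: float, ref_lat: float, ref_lon: float):
    """East/north offsets (meters) from a reference fix, valid over a small site."""
    north = math.radians(lat - ref_lat) * EARTH_RADIUS_M
    east = math.radians(lon - ref_lon) * EARTH_RADIUS_M * math.cos(math.radians(ref_lat))
    return east, north

# Offset of the print head from a commanded waypoint (arbitrary sample fixes).
east, north = gps_to_local_m(30.27020, 120.09510, 30.27015, 120.09505)
error_m = math.hypot(east, north)
```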

In summary, this project aims to revolutionize 3D printing construction by providing a climbing and self-supporting printing system capable of adapting to complex architectural forms and surface textures, offering a new paradigm for industrialized building construction.

## CRITERION OF SUCCESS
1. INITIALIZATION AND PRINTING COMMAND:
   - Receive input for architectural details and parameters.
   - Perform self-checks and initiate the printing command.
2. PRINTING CONSTRUCTION EXECUTION:
   - Execute printing at 0-1 m height with the moving and printing devices.
   - Wait for the concrete to reach the desired strength.
3. SELF-CLIMBING AND CONNECTION TO SMART FEEDING SYSTEM:
   - Move to the self-climbing start position.
   - Lift to the designated position.
4. HORIZONTAL MOVEMENT AND PRINTING ADJUSTMENT:
   - Detect and compensate for X-Y-Z oscillations.
   - Use a TOF camera to check accuracy and adjust the concrete flow.
5. TASK COMPLETION AND SELF-CLIMBING:
   - After printing, apply downward pressure.
   - Retract the horizontal movement device.

In operation, these steps would repeat for each climbing stage until the wall is complete; a state-machine sketch of this cycle follows the list.
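A minimal state-machine sketch of the cycle, with the five steps above as phases; the `status` flags stand in for real sensor checks and are assumptions, not the team's interface.

```python
from enum import Enum, auto

class Phase(Enum):
    INIT = auto()    # step 1: receive parameters, self-check
    PRINT = auto()   # step 2: print the current 0-1 m band
    CURE = auto()    # step 2: wait for concrete strength
    CLIMB = auto()   # step 3: self-climb, reconnect smart feeding
    ADJUST = auto()  # step 4: compensate oscillation, tune flow
    FINISH = auto()  # step 5: press down and retract

def next_phase(phase: Phase, status: dict) -> Phase:
    """Advance the build cycle; the status flags mimic sensor results."""
    if phase is Phase.INIT and status.get("self_check_ok"):
        return Phase.PRINT
    if phase is Phase.PRINT and status.get("band_printed"):
        return Phase.CURE
    if phase is Phase.CURE and status.get("strength_reached"):
        return Phase.CLIMB
    if phase is Phase.CLIMB and status.get("anchored"):
        return Phase.ADJUST
    if phase is Phase.ADJUST and status.get("band_complete"):
        return Phase.FINISH if status.get("wall_complete") else Phase.PRINT
    return phase  # stay put until the current phase's condition is met
```

Each flag would be driven by the corresponding subsystem (self-check, strength sensing, anchor lock, TOF feedback).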
## DISTRIBUTION OF WORK
1. JIANYE CHEN: MECHANICAL DESIGN AND MANUFACTURE
   a) Jianye specializes in mechanical design and manufacturing aspects of the project.
   b) His expertise includes creating detailed mechanical plans, prototyping, and ensuring the physical components are well-crafted.

2. ZHENGHAO ZHANG: MECHANICAL DESIGN AND MANUFACTURE
   a) Zhenghao complements Jianye's skills in mechanical design and manufacture.
   b) Together they form a strong team handling the physical aspects of the project, ensuring its mechanical components are robust and functional.

3. SHENGHUA YE: PCB AND DIGITAL HARDWARE
   a) Shenghua focuses on the PCB and digital hardware aspects of the project.
   b) His expertise includes designing and implementing the electronic components, ensuring seamless integration with the mechanical elements.

4. BENHAO LU: SOFTWARE
   a) Benhao specializes in the software related to printing.
   b) His role involves developing the necessary software for the printing process, optimizing functionality, and ensuring a user-friendly interface.

# A Wearable Device Outputting Scene Text For Blind People
# Revised

We discussed the problem with our mentor, Prof. Gaoang Wang, and arrived at the solution described below.

## TEAM MEMBERS (NETID)

Xiaomeng Yang (xy20), Youchuan Liu (yl38), Changyu Zhu (changyu4), Hangtao Jin (hangtao2)

## INSTRUCTOR

Prof. Gaoang Wang

## LINK

This idea was pitched on Web Board by Xiaomeng Yang.

https://courses.grainger.illinois.edu/ece445zjui/pace/view-topic.asp?id=64684

## PROBLEM DESCRIPTION

There are about 12 million visually impaired people in China, yet blind people are rarely seen on the street. One reason is that it is difficult for them to figure out where they are when traveling to unfamiliar places. Blind travelers are usually equipped with navigation devices, but these are not accurate enough: when a user arrives near a destination, it is still hard to find its exact position. We would therefore like to build a device that reads out the scene text around a destination so that blind users can reach the exact place.

## SOLUTION OVERVIEW

We'd like to make a device with a micro camera and an earphone. When the user clicks a button, the camera takes a picture and sends it to a remote server through a communication subsystem. On the server, text is extracted and recognized from the picture by a neural network, then converted to a voice message with the Google text-to-speech API. The voice message is sent back and played through the earphone. The device can be attached to the glasses that blind people wear.

Navigation equipment can tell blind users the location and direction of their destination, but they still need detailed directions at the destination itself, and our wearable device helps solve this problem. The camera is fixed to the head, just like our eyes, so when the user turns their head, the camera captures scene text in different directions. Our target scenario is identifying store names along a street. These store signs are generally not tall, about two stories high, so the user can tilt their head up and down to let the camera capture the whole storefront; no matter where the store name is placed, it can be recognized.

For example, if a blind person wants to go to a bookstore, the navigation app will announce, once he is near the destination, that he has arrived and that the store is on his right. However, there may be several stores on his right. He can then face right, take a photo in that direction, and find out whether the store is there. If not, he can turn his head a little and take another photo in the new direction.

![figure1](https://courses.grainger.illinois.edu/ece445zjui/pace/getfile/18612)

![figure2](https://courses.grainger.illinois.edu/ece445zjui/pace/getfile/18614)

## SOLUTION COMPONENTS

### Interactive Subsystem

The interactive subsystem interacts with the blind user and the environment.

- 3-D printed frame that attaches to the glasses through a snap-fit structure and holds all the accessories in place

- Micro camera that can take pictures

- Earphone that can output the speech

### Communication Subsystem

The communication subsystem is used to connect the interactive subsystem with the software processing subsystem.

- A Raspberry Pi (RPi) gets the images taken by the camera and sends them to the remote server through a Wi-Fi module. After processing on the remote server, the RPi receives the speech information (an .mp3 file).
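
A minimal sketch of this round trip, assuming a hypothetical `/recognize` endpoint on the remote server and the common `requests` library and `mpg123` player on the RPi; the team's actual transport and playback path may differ.

```python
# Sketch of the RPi round trip: POST a captured image, save and play the
# returned speech. The server URL and endpoint are placeholders.
import subprocess
import requests

SERVER_URL = "http://example-server:8000/recognize"  # hypothetical address

def send_and_play(image_path: str) -> None:
    with open(image_path, "rb") as f:
        resp = requests.post(SERVER_URL, files={"image": f}, timeout=30)
    resp.raise_for_status()
    with open("reply.mp3", "wb") as out:
        out.write(resp.content)           # speech returned by the server
    subprocess.run(["mpg123", "reply.mp3"], check=True)  # play via earphone
```

On a button press, the interactive subsystem would simply call `send_and_play()` on the freshly captured image.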

### Software Processing Subsystem

The software processing subsystem processes the images and outputs speech. It includes two parts: text recognition and text-to-speech.

- An OCR neural network that extracts and recognizes Chinese text from the environmental images delivered by the communication subsystem.

- The Google text-to-speech API is used to convert the recognized text to speech.
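
The two parts could be chained as in the sketch below; `easyocr` is used only as a stand-in for the team's own OCR network, and the `gTTS` package as a convenient wrapper for Google's text-to-speech, so both library choices are assumptions.

```python
# Server-side pipeline sketch: OCR the image, then synthesize Chinese speech.
import easyocr
from gtts import gTTS

reader = easyocr.Reader(["ch_sim", "en"])  # simplified Chinese + English

def image_to_speech(image_path: str, mp3_path: str) -> str:
    results = reader.readtext(image_path)  # list of (bbox, text, confidence)
    text = " ".join(t for _, t, conf in results if conf > 0.3)
    # Fall back to "no text recognized" (in Chinese) if nothing was found.
    gTTS(text or "未识别到文字", lang="zh-CN").save(mp3_path)
    return text
```

Filtering by OCR confidence keeps low-quality detections out of the spoken output.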

## CRITERION FOR SUCCESS

- Use a neural network to recognize Chinese scene text successfully.

- Use the Google text-to-speech API to convert the recognized text to speech.

- The device can transmit the environment pictures or video to the server and receive the speech information correctly.

- Blind users can use the speech information to locate their position.