Checkout, Awards, and Pizza

Description

Checkout Day usually occurs on Reading Day, after Final Reports have been turned in. You'll need to turn in your lab notebook, return all of the major parts belonging to the university to your TA, and make sure that nothing is missing from your lab kit before you will receive a grade in the class. After Checkout, a short Awards ceremony will be held to honor those who, in the course staff's opinion, have delivered exemplary projects during the semester. Immediately afterwards, you are all welcome to celebrate your project completion with free pizza and soda with the rest of your classmates!

Requirements and Grading

The Checkout Form is your documentation that you turned in the lab kit; both you and your TA need to sign it before you submit it to your TA.

Submission and Deadlines

The specific date and time for lab checkout can be found on the course calendar.

Autonomous Behavior Supervisor

Shengjian Chen, Xiaolu Liu, Zhuping Liu, Huili Tao

Featured Project

## Team members

- Xiaolu Liu (xiaolul2)

- Zhuping Liu (zhuping2)

- Shengjian Chen (sc54)

- Huili Tao (huilit2)

## Problem:

In many real-life scenarios, we need AI systems not only to detect people but also to monitor their behavior. However, today's AI systems can typically detect faces but still lack analysis of movements, so the results they produce are not comprehensive enough. For example, in many high-risk laboratories, we need to ensure not only that the person entering the laboratory is identified, but also that he or she is acting in accordance with the regulations to avoid danger. Beyond this, the system can also help supervise students during online exams: by combining a student's expressions and gaze with his or her movements, it can better maintain the fairness of the test.

## Solution Overview:

Our solution to the problem described above is an Autonomous Behavior Supervisor. The system mainly consists of a camera and an alarm device. Using real-time photos taken by the camera, the system performs face verification. When a person is successfully verified, the camera starts to monitor the person's behavior and his or her interaction with the surroundings, and the system then determines whether a dangerous action or unreasonable behavior is occurring. As soon as the system detects something uncommon, the alarm rings. Conversely, if the person fails verification (i.e., does not have permission), the words "You do not have permission" are displayed on the computer screen.

## Solution Components:

### Identification Subsystem:

- Locate the position of a person's face

- Determine whether that face is recorded in our system

The camera captures a person's facial information as image input to the system. Several Python libraries, such as OpenCV, provide useful tools for this. The identification process has three steps: first, we establish a database of facial information and store the encoded faceprints. Second, we use the camera to capture the current face image and generate the face-pattern encoding for that image. Finally, we compare the current facial encoding with the information in storage. This comparison is done by setting a threshold: when the similarity exceeds the threshold, we regard the person as recorded. Otherwise, the person is banned from the system unless he or she records his or her facial information in our system.
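The threshold comparison in the final step can be sketched as follows. This is a minimal illustration only: in the real system the encodings would come from a face-embedding model (e.g. built with OpenCV or a similar library), while the toy 3-dimensional vectors, the names, and the 0.6 threshold here are placeholder assumptions.

```python
import math

def euclidean_distance(a, b):
    """Distance between two faceprint vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(stored_encodings, candidate, threshold=0.6):
    """Return the name of the closest stored faceprint if it lies within
    the threshold; otherwise None (person not recorded)."""
    best_name, best_dist = None, float("inf")
    for name, encoding in stored_encodings.items():
        dist = euclidean_distance(encoding, candidate)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

# Toy 3-d encodings stand in for real high-dimensional faceprints.
database = {"alice": [0.1, 0.2, 0.3], "bob": [0.9, 0.8, 0.7]}
print(verify(database, [0.12, 0.19, 0.31]))  # close to alice -> "alice"
print(verify(database, [2.0, 2.0, 2.0]))     # far from both -> None
```

Raising the threshold admits more borderline matches (fewer false rejections, more false acceptances), which is the trade-off the proposal's threshold setting controls.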

### Supervising Subsystem

- Capture people's behavior

- Recognize the interaction between humans and objects

- Identify what people are doing

This subsystem captures and analyzes people's behavior, that is, the interaction between people and objects. For the algorithm, we initially decided to build on VSG-Net or other established HOI (human-object interaction) models, analyzing and adjusting them to suit our system. The algorithm is a multi-branch network:

- Visual Branch: extracts visual features from people, objects, and the surrounding environment.

- Spatial Attention Branch: models the spatial relationship between human-object pairs.

- Graph Convolutional Branch: treats the scene as a graph, with people and objects as nodes, and models the structural interactions.

This is computational work that requires training on a dataset before being applied to the real system. The accuracy may not reach 100%, but we will do our best to improve the performance.
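The idea of combining the three branches can be sketched as below. Note this is only a toy illustration of score fusion: VSG-Net fuses learned features inside a trained neural network, whereas the hand-set per-class scores, the action names, and the simple averaging rule here are placeholder assumptions.

```python
def fuse_branches(visual, spatial, graph):
    """Average per-class interaction scores from the three branches and
    return the most likely interaction label plus the fused scores."""
    fused = {}
    for action in visual:
        fused[action] = (visual[action] + spatial[action] + graph[action]) / 3.0
    return max(fused, key=fused.get), fused

# Toy scores for one detected human-object pair (e.g. person and beaker).
visual  = {"hold": 0.7, "pour": 0.2, "throw": 0.1}
spatial = {"hold": 0.6, "pour": 0.3, "throw": 0.1}
graph   = {"hold": 0.8, "pour": 0.1, "throw": 0.1}

label, scores = fuse_branches(visual, spatial, graph)
print(label)  # "hold"
```

The point of the multi-branch design is that each branch contributes complementary evidence (appearance, geometry, scene structure), so a fused score is more robust than any single branch.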

### Alarming Subsystem

- Staying normal when common behaviors are detected

- Alarming when dangerous or non-compliant behaviors are detected

This is an alarm apparatus at the end of our system pipeline, used to report dangerous actions or behaviors that are not permitted. If the supervising subsystem detects actions such as "harm people", "illegal experimental operation", or "cheating in exams", the alarm subsystem sounds a warning so that people take notice. To achieve this, a "dangerous action library" containing dangerous behaviors must be prepared in advance; when the analysis of actions in the supervising subsystem matches some content of the action library, the system raises an alarm.
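The library lookup described above can be sketched as a simple set-membership check. The specific library entries come from the proposal's examples, but the function name and the use of `print` in place of a physical alarm device are placeholder assumptions for illustration.

```python
# Pre-defined "dangerous action library" (entries from the proposal).
DANGEROUS_ACTIONS = {
    "harm people",
    "illegal experimental operation",
    "cheating in exams",
}

def check_actions(detected_actions):
    """Return the detected actions that should trigger the alarm.
    A real deployment would drive a hardware alarm instead of printing."""
    triggered = [a for a in detected_actions if a in DANGEROUS_ACTIONS]
    if triggered:
        print("ALARM:", ", ".join(triggered))
    return triggered

print(check_actions(["hold beaker", "illegal experimental operation"]))
```

A set gives O(1) membership tests, so the lookup cost stays flat even if the action library grows large.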

## Criteria of Success:

- The system must recognize human faces and determine whether the person is in the backend database.

- The system must detect the human and the surrounding objects on screen and analyze the possible interactions between them.

- Based on these interactions, the system must detect potentially dangerous actions and give warnings.

## Division of Labor and Responsibilities

All members should contribute to the design and progress of the project, and we meet regularly to discuss and push the design forward. Each member is responsible for a certain part, but that does not mean it is his or her only work.

- Shengjian Chen: Responsible for the facial recognition part of the project.

- Huili Tao: HOI algorithm modification and its application to our project.

- Zhuping Liu: Hardware design and the connectivity of the project.

- Xiaolu Liu: Detail optimization and testing of the functions.
