
| # | Title | Team Members | TA | Documents | Sponsor |
|---|-------|--------------|----|-----------|---------|
| 13 | Epicast: Augmented Board Game | Di Wu, Jianli Jin, Yueshen Li, Zihao Zhu | Yushi Cheng | design_document1.pdf, design_document2.pdf, final_paper1.pdf, final_paper2.pdf, other1.pdf, proposal1.pdf | |
# RFA


## Team Members

- Yueshen Li, yueshen7
- Jianli Jin, jianlij2
- Di Wu, diw10
- Zihao Zhu, zihao9


## Title

Epicast: Augmented Board Game

## Problem

Dungeons & Dragons (D&D) is a game that thrives on the breadth of imagination and the depth of interaction. It empowers players to construct elaborate worlds and characters drawn from the vast expanse of their creativity. This unfettered freedom not only offers a canvas for creation and interaction but also makes each gameplay experience profoundly personal and distinct. However, the richness of these imagined scenes and environments is often bottlenecked by the need for verbal expression and lacks an intuitive, sensory display. This limitation hampers the visualization of the game's full potential, presenting a hurdle for some players and constricting the game's broader allure. In addition, the Dungeon Master (DM), the player who serves as the game's narrative architect, is often burdened with extensive preparatory work, juggling game mechanics with storytelling, which can be arduous and time-consuming.


## Solution Overview

"Project Epicast" is a comprehensive system designed to enhance the Dungeons & Dragons gaming experience. Central to this system is a GPT-powered AI that serves as an automated Dungeon Master, guiding gameplay with intelligent narrative creation and player interaction.

The visual aspect is handled by an overhead projector, capable of displaying intricate game scenes, animations, and simulating actions such as dice rolls directly onto the gaming surface. Its height is adjustable for optimal image quality. Gesture recognition is enabled through a sophisticated camera, allowing for intuitive control and the ability to capture memorable moments. Audio immersion is provided by integrated speakers and microphones for voice commands, narrative flow, and dynamic sound effects. Completing the sensory experience are ambient lights that adjust to the game's mood, providing synchronized lighting effects.

Lastly, while our focus is on enhancing the D&D experience, it's important to recognize that our board game experience-augmenting module has widespread applications. The demand for immersive, enriched board game interactions extends beyond D&D. Our target example of D&D serves as an ideal prototype for a broader market of board games seeking similar enhancements for a more engaging player experience.

## Solution Components

### Real-time Data Processing System

The real-time data processing system captures, processes, and generates data in real time. It consists of a data transmission module, a sophisticated camera, a projector, an integrated speaker and microphone, and an ambient light set.

- The data transmission module carries the data obtained from the sensors to the system's processing unit and, in the other direction, delivers the processing unit's instructions and any data the sensors need to output. This can be done through a wired or wireless connection.

- The camera is the core component of the real-time data processing system, used to capture image or video data containing vital information such as players' hand gestures. It can be an ordinary USB camera or a high-resolution, high-speed industrial-grade camera.

- The projector is another core component of the real-time data processing system. Projection is a technique for casting an image or video onto an object or plane; in our setting, the projector projects images onto a wall or slab to inform players of the game status, and the processing unit sends the related data to the projector in real time. As with the camera, the projector can be an ordinary consumer-grade unit or a high-resolution, high-speed industrial-grade one.

- The microphone gathers players' voices and converts them into digital audio, and the speaker converts digital data sent by the processing unit back into sound. An integrated speaker and microphone handles all voice input and output for the system.

- The ambient light set provides an ambient light source with lighting conditions that adjust to the game's mood, delivering synchronized lighting effects that enhance the player's experience. The light set can also improve the visibility of the scene for the camera, using camera feedback to raise the accuracy of the real-time recognition system.
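As an illustration of the camera bullet above, the sketch below interprets hand landmarks with simple rules. The 21-point indexing follows the MediaPipe Hands convention, and the mapping from finger counts to game commands is hypothetical, not the team's final recognition design:

```python
# Rule-based gesture interpretation on hand landmarks (sketch).
# Landmark indices follow the MediaPipe Hands 21-point convention;
# the gesture-to-command mapping below is a hypothetical example.

FINGER_TIPS = [8, 12, 16, 20]   # index, middle, ring, pinky fingertips
FINGER_PIPS = [6, 10, 14, 18]   # corresponding middle (PIP) joints

def count_extended_fingers(landmarks):
    """Count non-thumb fingers whose tip is above its middle joint.

    `landmarks` is a list of 21 (x, y) pairs in image coordinates with
    y increasing downward, so "above" means a smaller y value.
    """
    return sum(
        1
        for tip, pip in zip(FINGER_TIPS, FINGER_PIPS)
        if landmarks[tip][1] < landmarks[pip][1]
    )

def gesture_name(landmarks):
    """Map a finger count to an in-game command (hypothetical mapping)."""
    return {0: "fist/confirm", 1: "point/select", 4: "open palm/cancel"}.get(
        count_extended_fingers(landmarks), "unknown"
    )
```

In a full pipeline, the landmarks would come from a hand-tracking model running on the camera frames, and the resulting command would be forwarded through the data transmission module to the processing unit.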



### GPT-Core DM System

The GPT-Core DM system acts as an assistant to the Dungeon Master (DM), providing support during gameplay. Through modeling and data training, GPT-Core as a DM assistant should perform the following basic functions to run a complete game:

- **Generate adventure missions and plot**: The DM can provide key information to GPT-Core, such as mission type, location, and characters, and GPT-Core can then generate a complete adventure mission with a coherent plot including enemies, puzzles, rewards, etc.

- **Generate player characters and NPCs (Non-Player Characters)**: The DM can use GPT-Core to ask players what kind of characters they want, and then generate corresponding characters with balanced properties such as backstory, personality traits, and goals. GPT-Core can also generate detailed information for many NPCs with little effort, enhancing game quality.

- **Generate other detailed information**: The DM can use GPT-Core to generate any detailed information about an environment, such as the layout of a room, its decoration, or its smell, and GPT-Core can produce vivid descriptions that let players better engage with the game world. For interactions between player characters and NPCs, GPT-Core can also describe NPC reactions to players' actions in detail, immersing players more deeply in the game.
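Each of the functions above ultimately reduces to assembling a prompt from the DM's key information. A minimal sketch in Python, using a hypothetical template (the actual GPT-Core prompts and model interface are design decisions not specified here):

```python
def build_mission_prompt(mission_type, location, party, difficulty="medium"):
    """Assemble a DM-style prompt from key information.

    Hypothetical template for illustration; the real GPT-Core system
    prompt would be refined through data training and playtesting.
    """
    lines = [
        "You are the Dungeon Master for a D&D session.",
        f"Create a complete {difficulty} {mission_type} adventure set in {location}.",
        "The party consists of: " + ", ".join(party) + ".",
        "Include enemies, at least one puzzle, and appropriate rewards.",
        "Describe the opening scene in vivid sensory detail.",
    ]
    return "\n".join(lines)
```

The returned string would then be sent to the language model; the same pattern extends to character, NPC, and environment generation by swapping the template.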



### Sensor Assistance System

Much of the time, we need to adjust the projector's orientation so that it projects the screen where we want, or the camera's orientation so that players' hand gestures are captured. The mechanical-engineering design covers the physical structure and installation of the projector and the camera. Here are some common design considerations:

- **Mounting bracket**: To securely mount the projector and camera in the desired position, a suitable mounting bracket needs to be designed. These brackets should adapt to different mounting environments and provide adjustable features to fine-tune projection angles and positions.

- **Adjustment mechanism**: To facilitate adjustment and alignment of the projector and camera, adjustable mechanisms such as rotation and tilt stages can be designed to support different projection angles and positions.

- **Cooling system**: The projector and camera, especially the projector, generate heat during operation, so an effective cooling system may need to be designed to ensure stable operation and prevent overheating.

- **Dust and moisture protection**: To protect the projector and camera from dust, moisture, and other external factors, appropriate protective measures need to be designed, such as filters and seals.

- Other small mechanisms can be designed as needed to assist the remaining sensors...
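For the adjustment mechanism, the underlying geometry is simple: a ceiling-mounted projector or camera aimed at a point on the table must tilt away from straight-down by the angle subtended by the horizontal offset. A small sketch (illustrative only; the actual bracket geometry is a design decision):

```python
import math

def tilt_angle_deg(mount_height_m, horizontal_offset_m):
    """Tilt needed for a ceiling-mounted projector/camera to aim at a
    point on the table, where 0 degrees means pointing straight down.

    Pure geometry: tilt = atan(offset / height). Values are illustrative;
    lens throw ratio and keystone correction are ignored here.
    """
    return math.degrees(math.atan2(horizontal_offset_m, mount_height_m))
```

For example, a unit mounted 1 m above the table and aimed at a spot 1 m to the side needs a 45-degree tilt; such a computed angle could drive the rotation/tilt mechanism directly.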


## Criterion for Success

- AI-based hand-gesture and audio detection capture and depict players' actions.
- A multi-module integrated hardware system that separates the game-scene projection region from the gesture-detection region improves the game experience.
- A Dungeon Master AI is created with GPT as its core.
- A free-moving projector and several ambient lights create a better experience.

## Distribution of Work

- Yueshen Li will work on the identification module and the real-time signal transmission system.

- Jianli Jin will work on GPT-DM model building and data training.

- Di Wu will work on hardware construction and the data logging system.

- Zihao Zhu will work on the physical design and the integration of extension modules.

Keebot, a humanoid robot performing 3D pose imitation

Zhi Cen, Hao Hu, Xinyi Lai, Kerui Zhu

Featured Project

# Problem Description

Life is movement, but exercising alone is boring. When people are alone, it is hard to motivate themselves to exercise and easy to give up. Faced with the unprecedented COVID-19 pandemic, even more people have to do sports alone at home. Inspired by "Keep", a popular fitness app with many video demonstrations, we want to build a humanoid robot, "Keebot", which can imitate the movements of the user in real time. Compared to a virtual coach in a video, Keebot can provide physical company by doing the same exercises as the user, making exercising alone at home more interesting.

# Solution Overview

Our solution to creating such a movement-imitating robot combines computer vision and robotic design. The user's movement is captured by a fixed, stabilized depth camera. The 3-D joint positions are calculated from the camera image with the help of neural networks and the camera's depth information. The 3-D joint position data are then translated into motor angular rotations and sent to the robot over Bluetooth, and the robot realizes the imitation by controlling its servo motors as commanded. Since the 3-D position data and mechanical control are not ideal, we leave out keeping the robot's balance, and the robot's trunk will be fixed to a holder.

# Solution Components

## 3-D Pose Info Translator: from depth camera to 3-D pose info

+ RealSense Depth Camera which can get RGB and depth frames

+ A series of pre-processors such as denoising, normalizing and segmentation to reduce the impact of noise and environment

+ Pre-trained 2-D Human Pose Estimation model to convert the RGB frames to 2-D pose info

+ Combine the 2-D pose info with the depth frames to get the 3-D pose info
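The last step — combining the 2-D pose with the depth frame — is a standard pinhole-camera back-projection. A minimal sketch, assuming calibrated intrinsics (librealsense also provides an equivalent helper, `rs2_deproject_pixel_to_point`):

```python
def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with its depth into camera-frame 3-D
    coordinates using the pinhole model.

    fx, fy are focal lengths in pixels and (cx, cy) is the principal
    point, all taken from the depth camera's calibration.
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)
```

Applying this to each 2-D joint detected by the pose-estimation model yields the 3-D joint positions fed to the control system.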

## Control system: from model to motors

+ An STM32-based PCB with a Bluetooth module and servo motor drivers

+ A mapping from the 3-D poses and movements to the joint parameters, based on Inverse Kinematics

+ A closed-loop control system using PID or the state-space method

+ Generation of control signals for the servo motor at each joint
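A discrete-time PID update for one joint can be sketched as follows (Python for illustration; the equivalent logic would run in C on the STM32, and the gains and time step are placeholders to be tuned on the real servos):

```python
class PID:
    """Discrete PID controller for one servo joint (sketch).

    Gains kp/ki/kd and the sample period dt are placeholders; a real
    deployment would also clamp the integral and the output command.
    """

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_angle, measured_angle):
        """Return the control output for one sample period."""
        error = target_angle - measured_angle
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

One such controller per joint, fed with the joint targets from the inverse-kinematics mapping, produces the commands sent to the servo drivers.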

## Mechanical structure: the body of the humanoid robot

+ CAD drawings of the robot’s physical structure, with 14 joints (14 DOF).

+ Simulations with the Robotics System Toolbox in MATLAB to test the stability and feasibility of the movements

+ Assembling the robot with 3D-printed parts, fasteners, and motors

# Criterion of Success

+ 3-D pose info and movements are extracted from the video captured by the RealSense Depth Camera

+ The virtual robot can imitate human movements in the MATLAB simulation

+ The physical robot can imitate human movements with its limbs while its trunk is fixed