Modular 3D Holographic Display
Taofik Sulaiman (tosulai2), Pavan (pavanh2), Charles Ekwueme (cekwue2)
Displaying objects in 3D has tremendous benefits but remains severely limited. Current modes of 3D display are expensive and can be disadvantageous, at times even harmful to certain users, especially when viewed for extended periods (e.g. 3D images through red/blue anaglyph glasses). Anaglyph glasses are tiring on the eyes and can lead to headaches, since they distort images and the user's eye focus. Similarly, solutions like VR headsets require wearables that are heavy, also cause eye fatigue, and are limited to a single user.
Our main goal is to allow users to better visualize objects in a 3D space without the limitations of a 2D screen and without eye fatigue.
Solution Overview
General description of idea
Our device would take in 3D model files (e.g. STL/CAD files) via USB or another I/O interface, then display them as a holographic projection by converting the 3D model into four different 2D views that are projected into the hologram display. This solves the problem by allowing users to input their own 3D models (as STL/CAD files) and create an interactive display without a wearable or any of the health implications that come with one.
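The four-view conversion can be sketched in a few lines. This is a minimal illustration, not the final implementation: it assumes a simple orthographic projection and that each of the four views is the model rotated by 0/90/180/270 degrees about the vertical axis, one view per face of the display pyramid. All function names here are illustrative.

```python
import math

def rotate_y(p, deg):
    """Rotate a 3D point (x, y, z) about the vertical y-axis by deg degrees."""
    x, y, z = p
    t = math.radians(deg)
    return (x * math.cos(t) + z * math.sin(t),
            y,
            -x * math.sin(t) + z * math.cos(t))

def project_xy(p):
    """Orthographic projection: drop the depth (z) coordinate."""
    x, y, _ = p
    return (x, y)

def four_views(points):
    """Produce the same model seen from 0/90/180/270 degrees,
    as four lists of projected 2D points (one list per pyramid face)."""
    return {deg: [project_xy(rotate_y(p, deg)) for p in points]
            for deg in (0, 90, 180, 270)}

# Example: a single vertex on the +x axis lands at mirrored screen
# positions in the front and back views.
views = four_views([(1.0, 0.0, 0.0)])
```

In the real device, the projection would be done by the graphics pipeline (e.g. OpenGL ES) rather than per-point Python, but the geometry is the same.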
What makes our project unique?
Our project is novel in that we take the simple homemade hologram experiment available for phones and build a version that can sit on a table, display larger scenes, and accept user input to modify the scene or interact with the object.
Unlike other solutions, our design will decouple the graphics processing and display logic from the control device (i.e. laptop/computer).
Other modes for 3D viewing feature AR/VR devices and 3D images which use glasses.
* Our project will take a 3D model file as input and convert it into a 2D video/image intended for the holographic display.
* This project would incorporate and require the design of a board that interfaces with the holographic display, and possibly a sensor that tracks user motion. To make the project more interesting, we could combine input from different sensors to account for error.
* We will allow limited manipulation of the projected scene by letting the user move the object around. There will be no need to re-process the 2D image back to 3D, since the object itself will not be modified; only its position or orientation may change.
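As a sketch of the file-input step, assuming the models arrive as binary STL (a well-documented format: an 80-byte header, a little-endian uint32 triangle count, then 50 bytes per triangle holding a normal, three vertices, and a 2-byte attribute word), the parser is straightforward. The function name is illustrative.

```python
import struct

def read_binary_stl(data: bytes):
    """Parse a binary STL buffer into a list of triangles, each
    triangle being three (x, y, z) vertex tuples.

    Binary STL layout: 80-byte header, uint32 triangle count, then
    per triangle 12 little-endian float32 values (normal + 3 vertices)
    followed by a 2-byte attribute word, 50 bytes total."""
    count = struct.unpack_from("<I", data, 80)[0]
    triangles = []
    offset = 84
    for _ in range(count):
        # Read all 12 floats, then skip the 3 normal components.
        floats = struct.unpack_from("<12f", data, offset)
        v = floats[3:]
        triangles.append((v[0:3], v[3:6], v[6:9]))
        offset += 50
    return triangles
```

ASCII STL and CAD formats (e.g. STEP) would need separate parsers or an existing library; this only covers the binary STL case.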
* Holographic Display
- This unit will render the hologram via a standard 2D screen and shaped glass.
- This subsystem will include any required video driver and a 2D LCD screen.
* IO Peripherals
- This subsystem will encompass the inputs to the processing unit: 3D model data and control signals.
- This will likely be implemented via a USB controller that takes in serial input from a laptop/computer.
* 3D->2D mapping algorithm
- No specific algorithm is currently in mind; however, with the use of graphics libraries, specifically OpenGL ES, calculating the appropriate projections onboard will be simplified significantly.
* Processing Units
- Because of the graphics processing requirements, this will likely include a low-power CPU (e.g. ARM-based) and a graphics accelerator of some form.
* Power Subsystem
- Used to reliably power components of the other subsystems. This may include AC/DC converters, wall adapters, and/or voltage regulators.
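The control protocol carried over the USB/serial link above is not yet specified. As an illustration only (the verbs and argument counts are hypothetical), a minimal line-based parser for object-manipulation commands could look like this:

```python
def parse_command(line: str):
    """Parse one line of a hypothetical control protocol, e.g.
    'ROT 90' (rotate by degrees) or 'MOVE 1.0 0.0 -2.0' (translate).
    Returns (verb, args) or raises ValueError on malformed input."""
    parts = line.strip().split()
    if not parts:
        raise ValueError("empty command")
    verb, args = parts[0].upper(), parts[1:]
    if verb == "ROT" and len(args) == 1:
        return ("ROT", (float(args[0]),))
    if verb == "MOVE" and len(args) == 3:
        return ("MOVE", tuple(float(a) for a in args))
    raise ValueError(f"unknown command: {line!r}")
```

A simple text protocol like this keeps the laptop-side sender trivial and is easy to debug over any serial terminal; a binary framing could replace it later if bandwidth matters.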
Criteria for Success
* Final Milestone 1: Successful static rendering of 3D model to holographic display. (3D to 2D mapping)
* Final Milestone 2: Successful dynamic rendering (changing smoothly and requiring real-time scene calculation) of 3D model to holographic display.
* Final Milestone 3: Accurate user control of the holographic object from serial input or capacitive sensor input, with a maximum delay of 2 seconds.
* Final Milestone 4: Displayed image has decent resolution, i.e. the image looks clear.
* Final Milestone 5: Can operate for extended periods without failure (at least 20 seconds).
* Holographic Display:
* 3D-2D Mapping Algorithm Resources: