Navigating Ethical AI

Role:
The design and development of the platform;
the design of the cards.



Platform Content Framework
The content of the "Virtual Mirror" platform is structured into five key modules based on the Constructivist Learning Environments (CLEs) model: Problem Introduction, Case Studies, Information Resources, Creative Tools, and Offline Discussion. These modules offer a structured approach to understanding Deepfake technology from both technical and ethical perspectives.
Information Resources
This module covers three main areas: Deepfake, Generative AI, and Open-Source Stable Diffusion (SD). Students gain a comprehensive understanding of Deepfake, including its concepts, technical principles, and societal impacts, along with a deep dive into the mechanics behind generative AI technologies. Additionally, the module provides extensive case resources exploring the ethical challenges associated with these technologies.



Creative Tools
The platform offers two creative tools that allow students to engage in real-world, context-based creative practices through body-interactive experiences. These tools enable students to experiment with how Deepfake content is generated, deepening their understanding of both the technical and ethical implications through hands-on practice.


The creative tool interface is structured for intuitive interaction: The left panel displays a real-time camera feed. The central panel contains the parameter selection area, allowing users to input text, select emotions, or choose character options. The right panel serves as the image output area, where the system processes image generation requests in the background and displays the final Deepfake-generated image upon completion. The system is designed to handle minor input errors and provide feedback to enhance user experience.
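The error handling described above can be sketched as a small validation step. This is a minimal sketch of assumed behavior, not the platform's actual code: the emotion option set and the feedback messages are illustrative placeholders.

```python
VALID_EMOTIONS = {"happy", "sad", "surprised", "calm"}  # illustrative option set

def validate_request(text: str, emotion: str) -> tuple[dict, str]:
    """Return (cleaned request, user-facing feedback message)."""
    text = text.strip()
    if not text:
        return {}, "Please enter a short description first."
    if emotion not in VALID_EMOTIONS:
        # Tolerate a minor input error by falling back to a safe default.
        return {"text": text, "emotion": "calm"}, "Unknown emotion; using 'calm'."
    return {"text": text, "emotion": emotion}, "Generating image..."

req, msg = validate_request("  a smiling portrait ", "angry")
```

Keeping validation in the front-end lets the system give immediate feedback before the slower background generation step begins.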


The system is designed with a modular architecture that separates the user interface (UI), image processing, AI generation, and data storage. It follows a client-server model in which the Unity-based front-end communicates with a ComfyUI-based backend for AI image generation.
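ComfyUI exposes an HTTP API that accepts a workflow graph as JSON via `POST /prompt`, which is how a client like the Unity front-end can request a generation. The sketch below only builds the request payload; the one-node workflow, node ID "6", and the local server address are illustrative assumptions, not the platform's actual graph.

```python
import json
import uuid

COMFYUI_URL = "http://127.0.0.1:8188/prompt"  # assumed default local address

def build_prompt_payload(prompt_text: str, client_id: str) -> str:
    """Wrap an illustrative one-node workflow in the envelope ComfyUI expects."""
    workflow = {
        # Node ID "6" and the input wiring are placeholders, not the real graph.
        "6": {
            "class_type": "CLIPTextEncode",
            "inputs": {"text": prompt_text, "clip": ["4", 1]},
        },
    }
    return json.dumps({"prompt": workflow, "client_id": client_id})

payload = build_prompt_payload("a portrait in ink-wash style", uuid.uuid4().hex)
# The front-end would POST this JSON to COMFYUI_URL, then poll
# /history/<prompt_id> until the generated image is ready to download.
```

Because the backend processes generation in the background, the client submits the job, keeps its UI responsive, and fetches the result when it is ready.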
Tool 1:   "Create Someone Around You"

Using students’ body postures and faces to generate images.



Tool 2:   "Bring Book Characters to Life"

Using students’ imagination as prompts to reimagine characters from classical Chinese literature in middle school textbooks.


Tool 1:  
Students upload photos and generate Deepfake images by combining various facial expressions and postures. The system uses IP-Adapter FaceID for facial feature extraction, ControlNet Depth for posture replication, and Segment Anything for background isolation, ensuring accurate and realistic image generation.
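The three preprocessing stages named above can be sketched as a simple pipeline. The stage functions here are hypothetical stand-ins for the named components (IP-Adapter FaceID, ControlNet Depth, Segment Anything); in the real system these run as nodes in a ComfyUI graph, not Python calls.

```python
from dataclasses import dataclass

@dataclass
class Conditioning:
    face_embedding: str
    depth_map: str
    background_mask: str

def extract_face_id(photo: str) -> str:
    # Stand-in for IP-Adapter FaceID: embed identity features from the photo.
    return f"faceid({photo})"

def estimate_depth(photo: str) -> str:
    # Stand-in for ControlNet Depth: derive a depth map to replicate posture.
    return f"depth({photo})"

def segment_subject(photo: str) -> str:
    # Stand-in for Segment Anything: mask the subject to isolate the background.
    return f"mask({photo})"

def prepare_conditioning(photo: str) -> Conditioning:
    """Run the three preprocessing stages on one uploaded photo."""
    return Conditioning(
        face_embedding=extract_face_id(photo),
        depth_map=estimate_depth(photo),
        background_mask=segment_subject(photo),
    )

cond = prepare_conditioning("student_photo.png")
```

Separating the stages this way mirrors the modular node-graph design: each control signal is computed independently and then combined at generation time.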
Tool 2:


This tool allows students to recreate historical or literary figures by uploading posture images and text prompts. The system employs IP-Adapter for facial control and style transfer, supported by ControlNet Depth and ControlNet OpenPose to enhance posture accuracy. The FaceDetailer plugin further refines facial details, improving the quality and expressiveness of the final images.
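Tool 2 stacks two ControlNets on the same reference image and appends a face-refinement pass. The sketch below models that as a list of stage configurations; the strength and denoise values are made-up defaults for illustration, not the platform's tuned parameters.

```python
def build_pose_controls(pose_image: str,
                        depth_strength: float = 0.8,
                        pose_strength: float = 1.0) -> list[dict]:
    # Two ControlNets conditioned on the same posture reference image.
    return [
        {"type": "ControlNetDepth", "image": pose_image, "strength": depth_strength},
        {"type": "ControlNetOpenPose", "image": pose_image, "strength": pose_strength},
    ]

def add_face_refinement(stages: list[dict], denoise: float = 0.4) -> list[dict]:
    # FaceDetailer re-samples only the detected face region after generation.
    return stages + [{"type": "FaceDetailer", "denoise": denoise}]

pipeline = add_face_refinement(build_pose_controls("pose_ref.png"))
```

Applying Depth and OpenPose together constrains both the overall silhouette and the joint positions, which is why the text describes the combination as enhancing posture accuracy.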



This study conducted multiple workshops and designed a set of cards to integrate the curriculum with the platform. Through scenario-based and gamified approaches, participants were guided to design and discuss both the positive and negative applications of technology.


For more information about the platform: https://ji2adg2xlu7.feishu.cn/docx/JLxBd5E2noZ1XfxjC0icIQzpnLb?from=from_copylink




© harmonieyilinzhang0121@gmail.com