2025 OMSCS Conference Poster & Demo Session
3rd Floor Breakout Rooms
Posters
Self-Explanation in the Virtual Experimental Research Assistant (VERA)
Abstract
Self-Explanation in VERA investigates how an AI agent situated in an online learning environment centered on ecological modeling can answer human inquiries about its internal workings. Specifically, this presentation details ongoing research on how supporting self-explanation in VERA can improve the clarity and trustworthiness of the explanations humans receive. The poster will detail our cognitive approach (Theory of Mind), our technical approach (a web-based chatbot agent), and the methodology behind why episodic knowledge (knowledge of the user's session) is valuable when answering questions about VERA's internal mechanisms. Attendees will learn ways to better support human trust in the AI agents they build. They will also learn how content can be structured and recalled using a task-method-knowledge (TMK) representation. Leveraging TMK when prompting an AI agent for explanation may help learners absorb new details more clearly. This presentation will be useful for AI education researchers, engineers, and those interested in building trustworthy AI systems. This work is conducted within the Design Intelligence Lab led by Dr. Ashok Goel and is supported by the National AI Institute for Adult Learning and Online Education (AI-ALOE) with funding from the National Science Foundation (NSF).
Bios
Nick Alico is an OMSCS student at Georgia Tech. He graduated from Penn State University and the Schreyer Honors College with a B.S. in Human-Centered Design and Development. Nick currently works as a UX Engineer at Charles River Analytics.
Ruhma Mehek Khan is an MSCS graduate student at Georgia Tech with a Bachelor's in Computer Science and Social Sciences. Her professional experience includes roles as a Research Fellow at Microsoft and a Software Developer at Adobe.
Multi-Label Plant Species Classification with Self-Supervised Vision Transformers
Abstract
We present a transfer learning approach using a self-supervised Vision Transformer (DINOv2) for the PlantCLEF 2024 competition, focusing on multi-label plant species classification. Our method leverages both base and fine-tuned DINOv2 models to extract generalized feature embeddings. We train classifiers on these rich embeddings to predict multiple plant species within a single image. To address the computational challenges of the large-scale dataset, we employ Spark for distributed data processing, ensuring efficient memory management and processing across a cluster of workers. Our data processing pipeline transforms images into grids of tiles, classifies each tile, and aggregates these predictions into a consolidated set of probabilities. Our results demonstrate the efficacy of combining transfer learning with advanced data processing techniques for multi-label image classification tasks. Our code is available at github.com/dsgt-kaggle-clef/plantclef-2024.
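The tile-and-aggregate step described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the competition code: the function names are invented, the per-tile classifier is stubbed out, and max-pooling over tiles is just one plausible aggregation choice (a species confidently detected in any tile surfaces for the whole image).

```python
import numpy as np

def tile_image(image: np.ndarray, tile_size: int) -> list:
    """Split an H x W x C image into a grid of square tiles."""
    h, w = image.shape[:2]
    tiles = []
    for top in range(0, h - tile_size + 1, tile_size):
        for left in range(0, w - tile_size + 1, tile_size):
            tiles.append(image[top:top + tile_size, left:left + tile_size])
    return tiles

def aggregate_predictions(tile_probs: np.ndarray) -> np.ndarray:
    """Combine per-tile class probabilities (tiles x species) into one
    per-image probability vector by taking the max over tiles."""
    return tile_probs.max(axis=0)

# Toy example: 4 tiles, 3 species (probabilities from a stubbed classifier).
probs = np.array([
    [0.9, 0.1, 0.0],
    [0.2, 0.7, 0.1],
    [0.1, 0.2, 0.6],
    [0.8, 0.1, 0.1],
])
image_probs = aggregate_predictions(probs)  # array([0.9, 0.7, 0.6])
```

In the actual pipeline, each stage would run as a Spark transformation so the tiling and classification distribute across the cluster.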
Bio
Murilo Gustineli is a Senior AI Software Solutions Engineer at Intel, currently pursuing an OMSCS degree focusing on Machine Learning. He is particularly interested in Deep Learning, information retrieval, and biodiversity research, aiming to improve species identification and support conservation efforts.
Teaching Programming Through GitHub
Abstract
I use GitHub Actions to create repos that help people learn programming languages: learners solve puzzles in the repo by opening pull requests, and CI/CD returns feedback on their code (you solved the puzzle if all tests pass). I am incorporating AI chatbot feedback in the next year, and over the last two years I have gone through the growing pains of turning a good product into one with a good audience for Hacktoberfest. Attendees will be able to make repos like these for their own programming languages, or pitch in to help.
Bio
James Hennessy is an ML Engineer at Meta.
Emotion-Cognition Interaction in Humans & AI
Abstract
The role of emotions in problem solving is a relatively neglected area of research in artificial intelligence. This presentation summarizes a comparative analysis of human cognition and artificial intelligence with a focus on the role of emotion-cognition interaction in task-specific problem solving.
Bio
Maritza Ramirez Mills is an OMSCS alum and incoming graduate student in the philosophy of mind, cognitive and brain sciences.
Impact of Moderation Strategies on Toxicity: An Agent-Based Simulation
Abstract
Social media's interconnected nature means content moderation policies on one platform can ripple across the entire internet. This poster utilizes Agent-Based Modeling in CMU’s CASOS Lab’s Construct environment to investigate how different moderation strategies impact toxicity both on individual platforms and across the broader digital ecosystem. The simulation models user migration between platforms with varying moderation approaches, capturing how toxic users react to enforcement. The four strategies examined were: full banning, temporary bans, warning systems, and post deletion. Further, varying acceptable toxicity thresholds and moderation failure rates were used to examine the efficacy of different levels of strictness. While content moderation appears to effectively reduce toxicity on the enforcing platform, it could contribute to increased toxicity elsewhere as users migrate to less regulated spaces. These tentative findings suggest that platform-specific moderation efforts, particularly content moderation, may inadvertently contribute to ecosystem-wide toxicity concentration, rather than reduction, highlighting the need for more holistic approaches.
Bio
Erik Nordby's background is in software development, most recently at UPMC in Pittsburgh working as a rotational software engineer. Within OMSCS, he has been focusing on ML and independent research-focused classes. Outside of working and classes for OMSCS, he volunteers with organizations focused on responsible/safe AI development.
Demos
Infernal Frontier: A Game Made Possible Through the OMSCS Program
Abstract
Infernal Frontier is a 2-D story-driven RPG set in a fictional universe where two best friends are tasked with teleporting across different worlds to save their beloved kingdom. Join Albert and John on their grand adventure as they fight the evil forces of the Inferno, bent on destroying their world. Players will encounter a gripping story, turn-based combat, and a retro game feel. Stop by the booth to play a demo of the game!
Bio
John Bachoura is an OMSCS student who got his start with Computer Science as a self-taught programmer. He has a diverse range of experience with the following disciplines: Robotics, Fullstack Engineering, Game Development, Artificial Intelligence, and Computer Vision. When he's not developing, he enjoys playing guitar and exploring nature. Computer science has become his true passion in life and a source of joy and purpose.
An Observation Feature Study of Robot Imitation Learning for Autonomous Social Navigation on Construction Sites
Abstract
Because social navigation in complex environments like construction sites is challenging, the design and selection of observation features derived from different sensors becomes crucial in an imitation learning system. However, the real-world environment is typically too complicated to represent adequately, necessitating the manual design of features. Existing research either focuses on a certain type of feature or on a simple discrete task, limiting its applicability to long-horizon, continuous tasks like autonomous social navigation. In this paper, we build a simulated construction site environment in the Unity 3D game engine for systematically manipulating different types and qualities of observation features. This simulation is interfaced via the Robot Operating System (ROS) with neural network models trained offline, enabling the online testing of imitation learning algorithms in a realistic virtual environment. The experimental results indicate that, among all the observation features extracted from the environment, the distance to construction workers is essential for avoiding collisions while navigating to target positions. Additionally, depth cameras demonstrate cost-efficiency among the tested sensors, marginally outperforming the baseline derived from full-position-mode observation features. This work provides valuable insights for researchers developing learning from demonstration (LfD) algorithms and for robotics engineers seeking to optimize sensor selection, data quality, and state representation for autonomous agents.
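To make "observation feature" concrete, here is a hypothetical sketch of assembling an observation vector for a navigation policy, including the distance-to-nearest-worker feature the study found essential. The function name, feature choices, and 2-D geometry are invented for illustration and do not reflect the paper's actual feature set.

```python
import math

def observation_features(robot_xy, goal_xy, workers_xy):
    """Assemble a simple observation vector for a navigation policy.

    Features: distance and heading to the goal, plus the distance to
    the nearest construction worker (for collision avoidance).
    """
    dx, dy = goal_xy[0] - robot_xy[0], goal_xy[1] - robot_xy[1]
    goal_dist = math.hypot(dx, dy)
    goal_heading = math.atan2(dy, dx)
    nearest_worker = min(
        math.hypot(wx - robot_xy[0], wy - robot_xy[1])
        for wx, wy in workers_xy
    )
    return [goal_dist, goal_heading, nearest_worker]

obs = observation_features((0, 0), (3, 4), [(1, 0), (5, 5)])
# obs[0] == 5.0 (distance to goal), obs[2] == 1.0 (nearest worker)
```

A feature study like the one described would ablate entries of such a vector (or degrade their quality) and measure the effect on the learned policy's collision rate and goal-reaching success.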
Bio
Yilong Chen is a Ph.D. student in the Robotics & Intelligent Construction Automation Lab (RICAL) at Georgia Tech and a former OMSCS student. His research interests are in construction automation, computer vision, and robotics. He looks forward to combining computer science knowledge with building construction know-how to make construction engineering more secure, more productive, and more sustainable.
Program
Check out the Program page for the full program!
Questions About the Conference?
Check out our FAQ page for answers and contact information!