2026 OMSCS Conference Poster & Demo Session
2nd Floor Hallways
Posters
Beyond the Hype: Building Trustworthy and Effective LLM Solutions in the Real World
Abstract
The rapid rise of Large Language Models (LLMs) like GPT-4 and Gemini has transformed how we approach problem-solving in data science, analytics, and business intelligence. Yet many real-world implementations fall short due to blind reliance on these models and a poor understanding of how they actually work. This talk draws from research-backed insights and a masterclass I led for AI and ML leaders on how to responsibly harness LLMs using both internal and external data. We’ll explore landmark findings such as Lost in the Middle (why models lose information mid-context), Google’s Needle in a Haystack test (how to evaluate true recall in long inputs), Retrieval-Augmented Generation vs. Long Contexts (why smarter retrieval often beats bigger models), and the Netflix cosine similarity study (why embedding similarity isn’t always meaningful). Through these examples, attendees will learn to identify when and why LLMs memorize, hallucinate, or misinterpret context—and how to design safeguards to mitigate these behaviors. The session emphasizes practical frameworks for responsible model usage, evaluation, and integration into enterprise workflows. Attendees will leave with a clearer, research-grounded understanding of LLM strengths and limitations, along with actionable strategies to build GenAI systems that are accurate, ethical, and business-ready—bridging the gap between academic insight and applied AI innovation.
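The needle-in-a-haystack evaluation mentioned above is easy to prototype. A minimal sketch, with all strings and names illustrative: a known fact is planted at varying depths in filler context, and each resulting prompt would be sent to the model under test to check whether the fact is recalled (the model call itself is omitted here).

```python
# Minimal sketch of constructing "needle in a haystack" probes: a known fact
# (the needle) is inserted at varying depths inside filler context, and a
# model's recall would be checked at each depth. `build_probe` only
# assembles the prompt; the actual model call is out of scope.

FILLER = "The quick brown fox jumps over the lazy dog. " * 200
NEEDLE = "The secret launch code is 7-4-1-9."
QUESTION = "What is the secret launch code?"

def build_probe(depth: float) -> str:
    """Place the needle at `depth` (0.0 = start, 1.0 = end) of the context."""
    cut = int(len(FILLER) * depth)
    context = FILLER[:cut] + NEEDLE + " " + FILLER[cut:]
    return context + "\n\n" + QUESTION

# One probe per depth; recall could then be scored, e.g. by checking that
# "7-4-1-9" appears in the model's answer for each depth.
probes = {d: build_probe(d) for d in (0.0, 0.25, 0.5, 0.75, 1.0)}
```

Plotting recall against depth is what exposes the "lost in the middle" effect: models often answer correctly when the needle sits near either end but fail when it sits mid-context.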
Presenters: Rahul Aggarwal and Yash Dosi
Bios coming soon
Evolution of Artificial Intelligence in Unmanned Aerial Vehicular Networks
Abstract
Our poster examines the transformative integration of Generative Artificial Intelligence and Machine Learning into autonomous Unmanned Aerial Vehicle systems, focusing on three critical areas: network optimization, security enhancement, and intelligent maintenance automation. The research synthesizes recent advancements that are fundamentally reshaping how UAV networks operate across various industrial sectors. The poster will cover several interconnected themes. First, we explore how GenAI and ML optimize UAV communication networks and enhance edge computing capabilities for real-time processing. Second, we examine advanced security paradigms, including reinforcement learning algorithms and adversarial multi-agent frameworks designed to combat sophisticated cyber threats. Third, we present automated maintenance solutions, particularly ML-driven aircraft surface defect detection systems that enable autonomous classification and predictive maintenance. Additionally, we address practical deployment considerations, computational constraints, and real-world implementation challenges across industries. Finally, we discuss ethical implications of autonomous decision-making and identify emerging research trajectories in this rapidly evolving field. Attendees will gain a comprehensive overview of state-of-the-art GenAI and ML applications in UAV systems, along with practical insights into deployment challenges and implementation strategies. They will understand how these technologies effectively address communication efficiency, threat mitigation, and maintenance automation while becoming aware of the ethical considerations surrounding autonomous systems. By synthesizing knowledge from recent literature, our poster provides a valuable foundation for researchers, practitioners, and policymakers interested in next-generation autonomous technologies, bridging theoretical advances with practical applications.
Presenters: Luke Buckner and Dustin Webb
Bios coming soon
High-Dimensional and Bayesian-Driven Enhancements to Ward's Hierarchical and Lloyd's K-Means Clustering
Abstract
The overall topic of my poster concerns different clustering algorithms and how we can apply lemmas and models from various areas of mathematics, specifically High-Dimensional Probability and Bayesian Scientific Computing, to optimize the algorithms' complexity and enforce restrictions (priors) to filter outliers. The main points of my poster include an introduction to Ward's (hierarchical) and Lloyd's (K-means) classical algorithms, as well as their shortcomings. I will then explain how Ward's complexity can be decreased via the logarithmic dimension reduction of the Johnson–Lindenstrauss lemma, and how we can combat Lloyd's sensitivity to the initialization position by constructing a Bayesian model containing encoded priors, addressing the inherent ill-posedness of the problem by replacing the Maximum Likelihood (ML) estimator with a MAP regularizer. Additionally, my research included the implementation of both optimized algorithms. A summary of performance in differing use cases is included, highlighting how classical algorithms can be customized and tailored to more specific scenarios. I sincerely believe that this presentation benefits attendees by exposing them to how these well-known algorithms, which are often approached at a high level through the lens of computer science, can be further optimized and fitted to differing use cases by utilizing the underlying mathematics.
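The Johnson–Lindenstrauss reduction described above can be sketched in a few lines: a Gaussian random projection approximately preserves pairwise distances, so a linkage method like Ward's can run on far fewer dimensions. A pure-Python illustration under chosen dimensions and scaling, not the presenter's implementation:

```python
import math
import random

random.seed(0)

def jl_project(points, k):
    """Project d-dimensional points to k dimensions with a Gaussian random
    matrix scaled by 1/sqrt(k), so pairwise distances are preserved in
    expectation (the Johnson-Lindenstrauss guarantee)."""
    d = len(points[0])
    R = [[random.gauss(0, 1) / math.sqrt(k) for _ in range(d)] for _ in range(k)]
    return [[sum(row[j] * p[j] for j in range(d)) for row in R] for p in points]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Five points in 1,000 dimensions, reduced to 64: distances between the
# projected points stay close to the originals.
pts = [[random.gauss(0, 1) for _ in range(1000)] for _ in range(5)]
low = jl_project(pts, 64)
ratio = dist(low[0], low[1]) / dist(pts[0], pts[1])
```

Since Ward's merge criterion depends only on squared Euclidean distances, clustering the 64-dimensional projections yields nearly the same dendrogram at a fraction of the per-distance cost.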
Presenter: Idan Davidovich
Bio coming soon
Determining Railroad Crossing Delay Predictability Using Inference Data Driven Machine Learning Methods in Complex Geographical Environments
Abstract
Charleston, South Carolina, a city of approximately 869,000 people, consists of many different geographical features such as coastal uplands, near-shore islands, marshes, barrier islands and beaches, and riverine watersheds, waterways, and wetlands, making it a topographically complex region. With its many busy ports, freight traffic is a large contributor to Charleston's economy. This traffic leads to rail-related transportation delays, affecting the travel times of South Carolina residents daily, with delays averaging 6.5 minutes from 2022-2023. Railroad blockages occurred on average 4.5 times a day in 2023 and 6.7 times a day in 2022. These delays impact residents' quality of life by slowing transportation during emergencies and daily commutes. If the duration and likelihood of blockages can be confidently predicted, then delays could be communicated to residents and first responders to better optimize travel plans. Data collection for railroad crossings is currently limited to only 7 pilot sensors. Given this limited data, we will additionally utilize inferential data such as time features, recent blockages, climate, and port traffic data to train a machine learning model. If predictability can be confidently determined after model validation, potential plans for sensor expansion could be greenlit and predictions could be communicated to residents on dynamic message signs. Attendees will learn about real-world applications of machine learning models and how they can benefit society.
Presenters: Isaac Felix and Leonardo Savasta
Bios coming soon
AEGIS: Parametric Modeling of Spinal Curvature Using B-Splines and Inverse Biomechanical Analysis of Spinal Loading
Abstract
AEGIS (Analytical Estimation of Geometric Integrity and Stress) is a novel computational framework for non-invasive estimation of spinal biomechanical stress using posture-derived curvature models. AEGIS offers a new way to assess spinal biomechanics without the need for X-rays or specialized imaging. Using beam theory and anthropometric scaling, AEGIS estimates segmental bending moments and compressive stresses along the spinal column. The poster outlines the mathematical foundations of the model, data processing pipeline, and validation against benchmark biomechanical datasets. It also demonstrates the scalability and interpretability of AEGIS for applications in ergonomics, rehabilitation, and telehealth monitoring. Attendees will gain insight into a new, accessible approach to spinal biomechanics that bridges engineering precision with clinical applicability. They will understand how geometric modeling and inverse dynamics can yield meaningful stress estimations from simple posture data, enabling early detection of spinal overload or poor posture. This framework opens opportunities for cost-effective, population-scale spinal health assessment.
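The beam-theory estimate described above can be illustrated with a toy calculation: treating one spinal segment as a circular column, compressive stress combines an axial term F/A with a bending term Mc/I, where the bending moment comes from the load line's horizontal offset derived from posture. All parameter values below are illustrative, not AEGIS's calibrated anthropometrics:

```python
import math

def segment_stress(upper_mass_kg, horizontal_offset_m, radius_m):
    """Toy beam-theory estimate of compressive stress at one spinal segment
    modeled as a circular column: axial stress F/A plus bending stress
    M*c/I, with bending moment M = F * horizontal offset of the load line.
    Inputs are illustrative, not calibrated anthropometric values."""
    g = 9.81
    F = upper_mass_kg * g               # compressive load, N
    M = F * horizontal_offset_m         # bending moment, N*m
    A = math.pi * radius_m ** 2         # cross-sectional area, m^2
    I = math.pi * radius_m ** 4 / 4     # second moment of area, m^4
    return F / A + M * radius_m / I     # stress in Pa

# e.g. 40 kg of upper-body mass, 5 cm forward offset, 2 cm segment radius
stress_pa = segment_stress(40, 0.05, 0.02)
```

Even this toy version shows why posture matters: the bending term dominates the axial term once the load line drifts a few centimeters forward, which is the signal a curvature-derived model can pick up without imaging.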
Presenter: Raja Giddi
Bio coming soon
QNA: A Quantum Algorithmic Framework for Modeling Alternative Splicing and Drug Modulation
Abstract
Alternative splicing enables a single gene to produce multiple mRNA isoforms, but the number of possible splice combinations grows exponentially with exon count, making comprehensive modeling computationally intractable. QNA (Quantum Nucleic Algorithm) introduces a novel framework that applies quantum computation to simulate and analyze this combinatorial landscape. Main points include how QNA represents exon inclusion and exclusion as qubit states, allowing all possible isoform combinations to exist in quantum superposition. Biological constraints such as frame preservation and mutually exclusive exons are encoded as quantum oracles, while drug or regulatory effects are modeled as unitary operators that perturb these amplitude distributions. Using Grover’s search and amplitude estimation, QNA efficiently identifies biologically valid isoforms and quantifies splicing outcome changes (ΔPSI) with a theoretical speedup over classical methods. Through toy-scale quantum simulations, the framework demonstrates how quantum interference can capture and predict complex splicing behaviors in a way classical computation cannot. Attendees will gain insight into how quantum principles such as superposition, interference, and phase modulation can serve as computational analogs for biological regulation. QNA bridges quantum information theory and computational genomics, presenting a new paradigm for modeling gene expression and therapeutic modulation at the quantum level.
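For intuition, the combinatorial space that QNA encodes in superposition can be enumerated classically at toy scale: each exon is one bit, and the constraints the abstract describes as quantum oracles become ordinary predicates. The exon lengths and constraint set below are illustrative:

```python
from itertools import product

# Classical analog of the isoform space QNA places in superposition: each
# exon is a bit (1 = included), and biological constraints filter the 2^n
# combinations. A quantum oracle would mark exactly these valid states for
# Grover's search; here we filter explicitly. Values are illustrative.
exon_lengths = [90, 120, 47, 61, 150]   # nucleotides per exon
mutually_exclusive = [(2, 3)]           # exon pairs that cannot co-occur

def is_valid(bits):
    total = sum(l for b, l in zip(bits, exon_lengths) if b)
    if total % 3 != 0:                  # frame preservation
        return False
    return all(not (bits[i] and bits[j]) for i, j in mutually_exclusive)

isoforms = [bits for bits in product((0, 1), repeat=len(exon_lengths))
            if is_valid(bits)]
```

The exponential blowup is visible immediately: 20 exons already yield over a million candidate bitstrings, which is the regime where amplitude-based search is claimed to pay off.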
Presenter: Raja Giddi
Bio coming soon
From Early Depression Detection to Broader Mental Health Modeling: Insights from the eRisk Challenges at CLEF 2025–2026
Abstract
Mental health remains one of the most pressing global challenges, particularly as social interactions and self-expression increasingly occur online. Early detection and intervention are essential for preventing severe outcomes and promoting well-being. This poster highlights recent progress in applying natural language processing (NLP) and large language models (LLMs) to the detection and understanding of mental health signals in online communication, with a focus on the eRisk challenges at CLEF 2025 and 2026. The 2025 eRisk challenge centered on conversational early detection of depression, emphasizing the role of contextual cues, temporal patterns, and user language dynamics in identifying early signs of distress. Methodological advances and evaluation findings illustrate both the promise and limitations of current state-of-the-art NLP systems when applied to sensitive, real-world mental health data. Looking toward eRisk 2026, the scope expands to include ADHD prediction from large-scale social media datasets and human–AI interactions with simulated depressed users. These tasks underscore the importance of responsible AI design, interpretability, and reproducibility in mental health modeling. Attendees will gain insights into how modern NLP and LLM-based approaches can be adapted for early risk detection, how to evaluate such systems ethically, and how shared benchmarking efforts like eRisk contribute to the broader goal of advancing socially beneficial AI research.
Presenter: David Guecha Ahumada
Bio coming soon
Benchmark for Fine-Grain Object Detection of the Hume’s Leaf Warbler with CameraTraps
Abstract
Despite recent advances in computer vision, object detection in wildlife camera trap data remains a significant challenge due to fine-grained targets, environmental camouflage, motion blur, and high rates of false-positive camera triggering. We introduce HLW2025, a challenge dataset focused on the Hume’s leaf warbler, consisting of over 3,000 annotated frames from videos captured in the dense forests of Northern India. HLW2025 targets failure scenarios for state-of-the-art models, including extreme occlusion, fine-grained detection, and motion blur challenges common in small bird monitoring. Evaluation across state-of-the-art object detection models reveals significant performance variation under these challenging conditions, with the Swin Transformer achieving the highest mAP@[0.5:0.95] of 0.567. The HLW2025 dataset exposes key limitations in current methods and underscores the need for domain-adapted approaches for fine-grained wildlife detection with camera traps, particularly highlighting that generic architectures may be poorly suited for single-class small object detection in cluttered natural environments. Attendees will also learn how asynchronous collaboration can be used to solve challenging AI and ML problems.
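The mAP@[0.5:0.95] metric cited above averages precision over IoU thresholds from 0.5 to 0.95, which is why small targets are so punishing: a few pixels of localization error can sink IoU below even the lowest threshold. A minimal sketch, with boxes chosen for illustration:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    mAP@[0.5:0.95] averages precision over IoU thresholds 0.5 to 0.95,
    so a detection must localize tightly to count at all."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection the same size as the ground truth but shifted 4 px in each
# axis: for a 10x10 target (a small bird), IoU already falls below 0.5.
gt = (10, 10, 20, 20)
det = (14, 14, 24, 24)
score = iou(gt, det)
```

The same 4-pixel shift on a 100x100 box would still score well above 0.5, which is one concrete way generic architectures tuned on large objects come up short on datasets like HLW2025.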
Presenters: James Hennessy and Kaushika Mohan
Bios coming soon
Personalizing Teaching through Instructor-AI Collaboration: The SAMI Dashboard
Abstract
The Social Agent Mediated Interactions (SAMI) Dashboard is an instructor-AI collaboration tool created to aid instructors in online learning environments through personalized, data-informed insights. The SAMI Dashboard complements the SAMI conversational AI tool, a chatbot that encourages peer interaction in discussion forums by matching students through commonalities such as shared interests. While SAMI engages learners directly, the SAMI Dashboard supports instructors by transforming complex learner data on engagement, collaboration, and sentiment into actionable visualizations. The poster will illustrate the iterative development process of the dashboard across multiple semesters, including user studies consisting of six one-hour interviews with instructors and teaching assistants. These studies informed the refinement of the user interface, visualization presentation, and the integration of a large language model (LLM) interface for adaptive personalization. The LLM recommendation layer empowers teachers to customize their interventions with at-risk students and recognize exemplary learners. Attendees will learn how the SAMI Dashboard utilizes a bidirectional feedback loop between teachers and AI: the tool receives instructors' guidance for the dashboard's personalization while simultaneously augmenting teachers’ awareness through synthesized data presented by the AI agent. The poster contributes to the ongoing dialogue on human-AI collaboration and personalized learning analytics, providing a path toward more empathetic, context-aware AI support for education. Participants will gain both theoretical and practical perspectives on AI tool design that aligns with instructor workflows and enhances decision-making for MOOCs (Massive Open Online Courses).
Presenter: Darby Hudnall
Bio coming soon
From OMSCS Theory to Practice: Building an Autonomous AI Cloud Engineer Using MCP & Serverless
Abstract
This talk explores how concepts from OMSCS, particularly from courses focused on AI reasoning, agent design, and scalable systems, can be translated into practical engineering by building an autonomous “AI Cloud Engineer” using the Model Context Protocol (MCP) and serverless infrastructure. The core idea is to move beyond conversational AI and demonstrate how agents can observe cloud environments, reason about system state, and take safe, controlled actions. The system integrates MCP with AWS Lambda and cloud monitoring to enable an AI agent to inspect logs and metrics, detect issues, suggest remediation steps, execute approved actions such as scaling, and verify results. The project applies OMSCS learnings in knowledge-based AI, autonomous systems, and cloud architecture to the real-world challenge of intelligent operations. Key points include agent reasoning workflows, tool-driven action execution, permission boundaries and guardrails for safe autonomy, and design lessons learned while evolving the prototype. The talk will also briefly show how OMSCS coursework informed decisions in planning, representation, and error handling. Attendees will gain insight into building applied agentic systems, understand the opportunities and risks of autonomous cloud operations, and learn practical patterns for connecting AI tools to real infrastructure in a responsible and testable way. This session is intended to inspire OMSCS students to leverage academic foundations to create impactful, hands-on AI systems and contribute to the emerging field of autonomous cloud engineering.
Presenter: Vinod Kumar
Bio coming soon
Cognitive Underpinnings of Movie Genre Preferences: Insights for Personalized AI Systems
Abstract
Movies are among the most ubiquitous forms of entertainment, yet the cognitive factors shaping individual film preferences remain underexplored. This study examines how demographic variables and personality traits influence movie genre inclination through the lens of cognitive science. Using a mixed-methods approach combining a quantitative survey of 145 participants and 11 qualitative interviews, the research investigates the decision-making processes underlying why individuals prefer certain film genres over others. Results indicate that comedy is the most universally favored genre across genders and age groups, while horror consistently ranks lowest. Men show a marked preference for action films, whereas women tend to favor romance. Younger participants gravitate toward animation and fantasy, and individuals from STEM backgrounds exhibit stronger interest in science fiction. Introverts were found to prefer thrillers more than extroverts, while preferences for specific actors and visual imagery further shaped viewing choices. The study interprets these findings through constructs from the Computational-Representational Understanding of Mind (CRUM) theory, explaining how mental representations and social factors jointly influence entertainment preferences. The results have practical applications in mental health interventions (e.g., cinematherapy), media production, and education, where film-based narratives can enhance engagement. Furthermore, these insights have implications for AI personalization and recommendation systems. Cognitive models of preference can inform how intelligent systems like LLMs interpret human intent, emotion, and context, which can lead to more empathetic, human-aligned personalization. Overall, this study underscores how cognitive science principles can inform the design of AI systems that more effectively understand, adapt to, and empathize with users.
Presenter: Rohan Limaye
Bio coming soon
What We Measure When We Measure Computational Creativity
Abstract
This poster surveys current approaches to evaluating creativity in large-language models and addresses gaps and issues found in current approaches. A recommended framework for evaluation is proposed by applying psychological and philosophical accounts of creativity to measurement design.
Presenter: Maritza Mills
Maritza Mills is an OMSCS alum and current graduate student of philosophy at the University of South Carolina.
The Science of Learning in the Information Age
Abstract
This poster explores how principles from the Science of Learning (SoL) can be used to design digital learning experiences that surpass traditional education. As technology reshapes how students access and engage with knowledge, simply transferring lectures to screens is not enough. Curriculum must be intentionally structured to develop reasoning, problem-solving, and long-term understanding. The poster highlights core SoL principles—Active Learning, Spaced Practice, Metacognition, Transfer of Learning, and Feedback Loops—and illustrates how each can be embedded into digital platforms. Examples include adaptive practice that strengthens memory, interactive AI tutors that prompt students to explain thinking, and project-based challenges that encourage applying knowledge in new contexts. This work is informed by my own academic background: I completed my undergraduate degree at a university explicitly designed around the Science of Learning, where courses emphasized interconnected skills, reflection, and real-world application. Additionally, my CS6460 project focused on designing an AI-supported “learn-by-teaching” platform grounded in SoL principles, further shaping the design framework presented here. Attendees will learn how digital environments, when guided by the science of learning, can provide scalable personalization, continuous formative feedback, and more meaningful skill development than one-size-fits-all traditional educational models. They will leave with practical design strategies to improve courses, platforms, and digital learning experiences.
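Spaced Practice, one of the SoL principles listed above, is straightforward to operationalize in a digital platform: review gaps expand over time rather than staying fixed. A simplified sketch, where the growth factor and first gap are illustrative rather than a specific published algorithm:

```python
def review_schedule(n_reviews, first_gap_days=1, factor=2.0):
    """Expanding-interval schedule (a simplified spaced-practice model):
    each gap between reviews grows by a fixed factor, so reviews fall on
    days 1, 3, 7, 15, ... after initial study. The factor and first gap
    are illustrative defaults, not a specific published model."""
    day, gap, days = 0, first_gap_days, []
    for _ in range(n_reviews):
        day += gap
        days.append(day)
        gap *= factor
    return days

plan = review_schedule(4)   # days on which to revisit the material
```

An adaptive platform would adjust the factor per learner and per item, shortening intervals after failed recalls and lengthening them after successful ones.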
Presenter: Ara Mkhoyan
Bio coming soon
The Science Behind Face Verification: A Modern Methods Perspective
Abstract
Face verification is a critical component of modern identity authentication, supporting applications from secure logins to digital onboarding. This presentation explores how computer vision and machine learning facilitate accurate face detection and matching in real-world scenarios. It begins with an overview of the image capture process for government IDs and selfies, highlighting techniques such as document boundary detection, illumination correction, and facial region extraction. The session then examines how deep learning models represent facial features as embeddings and perform similarity-based matching. Building on existing approaches, this talk introduces an experimental method to enhance the reliability of face matching between printed ID photos and live selfies. The proposed system employs domain-aware preprocessing and dual-encoder learning to address inconsistencies caused by lighting, image quality, and print artifacts. The presentation concludes with a discussion of current challenges in face verification, including liveness detection and data bias. Attendees will gain insight into how classical computer vision and deep learning converge to create robust, ethical, and efficient face verification systems suited for real-world identity validation.
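The similarity-based matching step described above typically reduces to thresholding a distance between embeddings. A minimal sketch with toy vectors, where the 0.6 threshold and 4-d embeddings are illustrative; production systems tune the threshold per model against false-accept and false-reject targets:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def same_person(id_embedding, selfie_embedding, threshold=0.6):
    """Verification as similarity thresholding. The 0.6 value is
    illustrative; real systems tune it per model to trade off false
    accepts against false rejects."""
    return cosine(id_embedding, selfie_embedding) >= threshold

# Toy 4-d vectors standing in for deep face embeddings:
match = same_person([0.9, 0.1, 0.2, 0.4], [0.8, 0.2, 0.25, 0.35])
```

The print-vs-selfie domain gap the talk targets shows up exactly here: artifacts from printing and recapture can depress similarity for genuine pairs, which is what domain-aware preprocessing and dual encoders aim to correct before thresholding.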
Presenter: Karthik Nagesh
Bio coming soon
Bridging Empirical and Modeled Biodiversity: Evaluating Community Composition and Stability Using Snapshot USA and IUCN Data
Abstract
Overall Topic: Across the United States, hundreds of camera traps capture images of mammals each year through the Snapshot USA (SSUSA) project. The IUCN (International Union for Conservation of Nature) provides globally standardized range maps through its Red List, based largely on expert opinion and historical data. While these maps serve as the global standard for species distributions, they are often coarse in resolution and may not reflect fine-scale habitat fragmentation, recent range shifts, or local absences. This study asks a central question: how well do these two views of biodiversity—empirical and modeled—agree? By comparing species observed in Snapshot USA data with those predicted by IUCN maps, we evaluate where our understanding aligns or diverges. Using multi-year data, we also explore how empirical sampling affects the stability and completeness of community definitions. Why It Matters: This study helps reveal whether global biodiversity maps still match real-world data. By linking predicted and observed patterns, we can find where our knowledge is out of date, figure out how much real data we need to trust our picture of a community, and improve conservation planning. These comparisons help us notice when animals disappear or move into new places because of climate change or habitat loss. Main Points: 1. Compare empirical and predicted biodiversity. 2. Assess data completeness and community stability. 3. Detect ecological change. What Attendees Will Learn: Using biodiversity data, this talk shows how real evidence challenges old assumptions, how missing data mislead decisions, and how clear comparisons uncover the true patterns behind complex systems.
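The empirical-vs-modeled comparison at the heart of the study can be framed as simple set arithmetic per site: the species the cameras recorded versus the species the range maps predict there. A sketch with illustrative species lists:

```python
def range_map_agreement(observed, predicted):
    """Compare species a camera-trap site actually recorded with species
    the range maps predict there. Returns Jaccard agreement plus the two
    disagreement sets. Species names below are illustrative."""
    observed, predicted = set(observed), set(predicted)
    agreement = len(observed & predicted) / len(observed | predicted)
    missed_by_maps = observed - predicted        # seen but not predicted
    unseen_but_predicted = predicted - observed  # predicted but not seen
    return agreement, missed_by_maps, unseen_but_predicted

obs = {"white-tailed deer", "coyote", "bobcat"}
pred = {"white-tailed deer", "coyote", "gray fox", "black bear"}
score, missed, unseen = range_map_agreement(obs, pred)
```

The two disagreement sets carry different meanings: species seen but not predicted suggest out-of-date maps, while species predicted but never detected may reflect either true local absence or insufficient sampling, which is why the multi-year stability analysis matters.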
Presenters: Neelima Pandey and Kefei Yan
Bios coming soon
Making AI Easier to Think With: Reducing Cognitive Load in AI-Assisted Decisions
Abstract
My poster, “Making AI Easier to Think With: Reducing Cognitive Load in AI-Assisted Decisions,” looks at how artificial intelligence tools often make people work harder than they should. Systems like chat assistants, copilots, or recommender dashboards generate pages of information, options, and explanations, but the result can be cognitive overload instead of clarity. The focus is on how small design and communication choices can reduce that mental effort and help people reach decisions more confidently. The poster highlights a few simple patterns that improve understanding: show key recommendations first, group related details together, and let users choose when they want to see more depth. It also discusses how visual hierarchy and timing, such as short summaries before long explanations, can make complex AI outputs easier to process. Rather than proposing new algorithms, this work looks at the human side of AI: how information is presented and how that affects decision quality. Attendees will leave with a clearer sense of why “more detail” is not always better and how thoughtful presentation can make AI tools feel more intuitive and trustworthy. The ideas are practical enough for designers, engineers, and anyone who builds or uses AI systems every day.
Presenter: Anvi Patel
Bio coming soon
Human–AI Interaction with Agentic Coding Tools
Abstract
This poster explores human-AI collaboration in agentic coding environments, focusing on tools such as Cursor, Cline, Claude Code, and Letta Code. It examines how developers use AI agents during coding tasks, investigating patterns of interaction, task delegation, and code integration. Developers interact with AI agents through patterns such as directive prompting, iterative refinement, and negotiated debugging. The study highlights how users decide when to delegate tasks to AI, when to override suggestions, and how they incorporate AI-generated results into their codebase. These collaboration patterns are shaped by the agentic capabilities of the tool, including its ability to proactively suggest solutions, handle multi-step tasks, and maintain project-wide context, which in turn influence task completion, confidence, and reliance on AI. Beyond performance, I will also examine whether developers find using these AI tools engaging and enjoyable, and how positive experiences influence adoption and continued use. These insights highlight patterns of human–AI collaboration and inform the design of AI-assisted coding environments that are efficient, flexible, and satisfying to use. Attendees will gain an understanding of how developers use agentic AI coding tools in practice, including common patterns of collaboration and task delegation. The poster also highlights insights into design considerations for AI coding environments, balancing AI suggestions and human oversight, productivity, and user experience. By exploring both the practical and experiential aspects of human-AI collaboration, attendees will learn how AI tools can be designed to support developers in ways that are efficient, flexible, and engaging.
Presenter: Shelley Pham
Bio coming soon
AI-Driven Framework to Mitigate Colorism and Promote Inclusive Representation for Women Across Asia
Abstract
This poster will present an AI-driven inclusivity framework that aims to address how colorism, which is deeply embedded in Asian societies and traditional mindsets, continues to marginalize women with darker skin tones in corporate, educational, and entertainment sectors. These cultural biases, long reinforced by beauty standards and media portrayals, are now being replicated and amplified by artificial intelligence systems such as beauty filters, hiring tools, and digital marketing algorithms trained on imbalanced datasets. The proposed study will employ a mixed-method, data-driven approach integrating computer vision, natural language processing (NLP), and human-centered evaluation. It will combine publicly available image datasets (Fitzpatrick17k, FairFace, DiverseSkinTone) with multilingual Asian text corpora (English, Bengali, Hindi, Korean, Tagalog, Japanese). Baseline deep learning models (ResNet-50, EfficientNet-B4) will be evaluated across skin-tone categories using fairness metrics such as statistical parity difference and equal opportunity ratio. To reduce bias, we will rebalance the training data, apply debiasing techniques, and use BERT-based sentiment analysis. Additionally, focus groups with Asian women professionals will be conducted to interpret model behavior and co-develop ethical guidelines. Attendees will learn how fairness-aware AI design, cross-cultural datasets, and participatory validation can transform technology into a means of dismantling colorist hierarchies. This research will seek to advance algorithmic fairness and representational justice, envisioning an inclusive digital future where women of all skin tones are represented with dignity across Asia.
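Statistical parity difference, one of the fairness metrics named above, compares positive-outcome rates between groups. A minimal sketch in which the group labels, outcomes, and the common 0.1 flagging convention are all illustrative:

```python
def statistical_parity_difference(outcomes, groups, privileged):
    """SPD = P(positive outcome | unprivileged) - P(positive | privileged).
    Zero means parity; fairness toolkits often flag |SPD| > 0.1. Inputs
    are parallel lists of 0/1 outcomes and group labels."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    return sum(unpriv) / len(unpriv) - sum(priv) / len(priv)

# Toy model decisions across two skin-tone groups (labels illustrative):
y = [1, 1, 0, 1, 0, 0, 1, 0]
g = ["light", "light", "light", "light", "dark", "dark", "dark", "dark"]
spd = statistical_parity_difference(y, g, privileged="light")
```

Computing the metric per skin-tone category before and after rebalancing and debiasing is how the study would quantify whether the interventions actually narrowed the gap.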
Presenter: Rifat Kabir Sharna
Bio coming soon
Modeling Preference Shifts via Reinforcement Learning in Semantic Spaces
Abstract
This poster will showcase an exploration into how reinforcement learning algorithms in recommendation systems can influence user interests in semantic spaces. Social media platforms optimize for engagement metrics, such as what is recommended and interacted with, yet the mechanisms behind this influence and their effects seem underexplored. This will likely be a work in progress at the time of submission. The plan is to represent users and content as vectors within a semantic space and to have RL agents act as simulated users, observing how recommended content is optimized and how each agent's position shifts over time. Within this controlled environment, I would also like to explore what can be done to shift an agent's interests from A to B. My hope for this project, and for attendees, is to bring attention and insight to how recommendation systems within social media can nudge user interests from A to B, and to highlight the ethical issues surrounding algorithmic influence and its potential for manipulation. This work touches on the fields of AI, HCI, and ML.
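One simple way to model the nudge described above is to let a user's position in semantic space move a small step toward each recommended item consumed. This is a toy stand-in for the RL-driven dynamics the poster will simulate; the vectors and rate are illustrative:

```python
def drift(user, recommended, rate=0.1):
    """One step of preference drift: the user's position in semantic space
    moves a fraction `rate` toward the content vector just consumed. A toy
    stand-in for RL-driven recommendation dynamics; rate is illustrative."""
    return [u + rate * (r - u) for u, r in zip(user, recommended)]

# Repeatedly recommending content at B pulls a user who started at A:
user = [1.0, 0.0]      # interest profile A
target = [0.0, 1.0]    # content cluster B
for _ in range(30):
    user = drift(user, target)
```

In the full simulation the "target" would not be fixed but chosen by an engagement-maximizing policy, which is precisely where the ethical questions about deliberate A-to-B steering arise.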
Presenter: Aman Singh
Bio coming soon
The Art of Procrastination: Personalized Cognitive Modeling for Goal-Aware Learning
Abstract
The poster examines how personalized cognitive modeling can help understand and reduce procrastination in adult learning. By integrating cognitive science, psychology, and AI, the project explores how emotional, motivational, and behavioral factors influence goal-oriented learning and self-regulation. Main Points: (1) Building a personalized cognitive model for goal-aware learning, identifying how learners set, achieve, and reflect on goals (2) Understanding procrastination comprehensively as a cognitive and emotional process rather than only delay, distinguishing helpful versus harmful forms (3) Designing an interactive AI tool that visualizes learning timelines, tracks progress, and offers adaptive interventions and feedback. Attendees will learn about the theory and design of cognitive models and how they can be applied to understand and build data-driven, personalized learning strategies for goal-aware learning and procrastination, in an effort to create an intelligent learning system. Additionally, they will see how the author changed roles from procrastinator to explorer and solution designer of procrastination through research at OMSCS.
Presenter: Sareen Zhang
Bio coming soon
Demos
NAHPU - A NAtural History Project Utility for Cataloging Specimens and Field Expeditions
Abstract
I will be demonstrating NAHPU - NAtural History Project Utility. This is a multi-platform app I have worked on as part of the HAAG research group at GT during the Fall 2025 semester. The app is used both by field researchers to catalog specimen data collected on expeditions (often in low-power, no-internet locations) and by natural history museum curators as a tool for reviewing, digitizing, and organizing specimen collections. I will structure this demonstration as a hands-on experience, where conference attendees can use the app and view example specimen data across several different devices (phone, tablet, and laptop). I will also display a brief presentation about the app itself and our expected publication related to the app (likely via a poster). By the time of the conference, the app will have been made widely available on various app stores and through the app's website - https://www.nahpu.app/. It incorporates several OMSCS-relevant topics, such as multi-platform user interfaces, user experiences in limited-connectivity spaces, and research at the intersection of bioinformatics and computer science via computer vision, machine learning, and on-device, low-power AI usage.
Presenter: Cody Henderson
Bio coming soon
Simulating the Impact of AI Coding Tools on Software Developers’ Work and Psychology (virtual)
Abstract
This project demonstrates the cognitive and psychological effects of LLM-powered coding tools on developers’ work habits and skills. While these tools can enhance confidence and curiosity by assisting with complex problem-solving and decision-making, they may also deskill developers and induce fear of replacement. The project uses a multi-agent system that simulates human–AI collaboration on coding tasks and compares scenarios ranging from no-AI baselines to full AI autonomy, measuring outcomes such as productivity, trust, and human skill levels over time. The demonstration will feature a web application (MVP) built on an agent-based computational model within a multi-agent system. It simulates interactions between software developers (human agents) and LLM coding tools (AI agents). The simulation allows varying collaboration scenarios (no-AI, assistive AI, full autonomy) and tracks outcomes like productivity, trust, and skill change. Implemented in Python (using Mesa for agent-based modeling, NumPy for computation, Matplotlib for visualization, and Flask for the API and user interface), the tool produces graphs and dashboards showing how the human–AI boundary shifts over time based on task complexity. It is a quantitative tool that can help organizations and researchers balance human–AI collaboration in the software development process. Findings aim to provide deeper insight into whether LLMs support upskilling or risk deskilling, offering guidance on balanced adoption of human–AI collaboration.
Presenter: Reena Kamra
Bio coming soon
OMSCS Compass: Alumni-Driven Course and Career Recommendation System
Abstract
OMSCS Compass will be an interactive web-based analytics platform that leverages alumni feedback to guide current and prospective OMSCS students in selecting courses and planning their academic journey. The system will aggregate alumni survey data and publicly available course reviews to uncover relationships between course choices, perceived workload, and real-world career outcomes. The demonstration will showcase a live, interactive dashboard where attendees can explore alumni insights through visualizations such as career outcome heatmaps, course sequence recommendations, and sentiment-based workload charts. Users will be able to input their own preferences, such as career goal, available weekly hours, or desired difficulty, and receive personalized course pathway recommendations derived from NLP and clustering models trained on alumni feedback. The session will be structured as a guided walk-through followed by interactive exploration, allowing attendees to see how the platform dynamically adapts recommendations based on user inputs and alumni data trends. Attendees will benefit by understanding how data-driven decision support can enhance academic planning in large-scale online programs. They will also learn how machine learning and natural language processing can transform unstructured alumni experiences into actionable insights for course and career planning. Ultimately, OMSCS Compass aims to foster a more informed, community-driven approach to navigating the OMSCS journey.
Presenter: Esha Mahendra
Bio coming soon
Agility: A VR Game-Based Clinical Assessment for Parkinson’s Disease
Abstract
Parkinson’s disease progressively erodes both motor and cognitive ease, yet traditional assessments capture only brief clinic visits—missing the subtle, day-to-day progress that defines real quality of life. Agility is a seated, hands-first virtual reality chess game that turns routine play into clinically meaningful data. Each move activates a short, gesture-based challenge modeled on MDS-UPDRS Part III hand items—finger tapping, pronation–supination, movement smoothness, and postural steadiness. As players interact, the system unobtrusively collects fine-grained telemetry on dexterity, tremor amplitude, and control precision, translating them into clear, clinician-readable summaries. Developed through Unity XR with an emphasis on accessibility and comfort for older adults, Agility merges evidence-based motor assessment with the motivational power of play. The talk outlines its design pipeline, adaptive difficulty logic, and telemetry modeling, as well as findings from early peer usability studies. Attendees will gain insight into how serious-game principles, sensor analytics, and human-centered design can converge to create engaging digital health tools that support both self-efficacy and clinical interpretation. Ultimately, Agility reframes VR gaming as a quiet, empowering form of assessment—one that restores confidence in motion while deepening understanding of Parkinson’s in everyday life.
Presenter: Sandra Nguyen
Bio coming soon
Yoshi: The Blame Game
Abstract
I propose to demonstrate Yoshi: The Blame Game, a 3D interactive game developed by our team for “CS 6457: Video Game Design”. It combines narrative storytelling, environmental puzzles, and AI-driven animal behaviors to create an engaging learning experience in game design and interactivity. The game follows Yoshi, a loyal dog navigating a household filled with playful chaos caused by a mischievous cat, while trying to complete tasks and earn the owner’s trust. During the demonstration, I will showcase the Intro and the Tutorial Level, followed by a live walkthrough of the Main Level, which includes different playable objectives. I will briefly explain the game’s underlying mechanics (such as player control systems, menus and hint UI, and object interaction scripts) while highlighting our use of Unity for character movement and environmental interaction. The session will be fully interactive: after my initial live demo, attendees will be invited to play the game themselves, experiencing how user inputs, in-game hints, and task logic combine to create a smooth and intuitive learning curve. Through this demonstration, participants will gain insight into game design structure, UI integration, and cross-script interaction in Unity. They will also experience how narrative motivation and feedback systems enhance player engagement, bridging creativity, coding, and user experience in an accessible, educational way.
Presenter: Rifat Kabir Sharna
Bio coming soon
Program
Check out the Program page for the full program!
Questions About the Conference?
Check out our FAQ page for answers and contact information!