2026 OMSCS Conference Poster & Demo Session

Tuesday, May 12, 10:30 a.m.-12:10 p.m.
2nd Floor Hallways

Posters

Beyond the Hype: Building Trustworthy and Effective LLM Solutions in the Real World

Abstract

Large Language Models are reshaping how organizations approach automation, analytics, and decision support. Yet as adoption accelerates, many real-world implementations reveal predictable weaknesses such as hallucinations, brittle context handling, evaluation blind spots, and security vulnerabilities. These behavioral patterns matter not only for standalone LLM applications, but also for agentic AI systems where the LLM serves as the core reasoning engine. This poster session presents a curated synthesis of influential academic and industry studies that examine how LLMs behave under stress. Rather than introducing new experiments, we translate findings from work on long-context degradation, retrieval versus scaling trade-offs, inference variability, prompt injection, and evaluation design into a coherent practitioner-focused lens. The goal is to help builders move from intuition to informed design. What does long-context failure mean for complex workflows? When does retrieval outperform larger models? Why do certain evaluation setups create false confidence? How do reasoning inconsistencies propagate inside multi-step or agent-driven systems? Attendees will leave with a structured understanding of common LLM failure modes, practical design principles to mitigate them, and a clearer framework for building reliable, secure, and production-ready LLM or agentic AI solutions grounded in evidence rather than hype.

Bios

Image
Rahul Aggarwal
Rahul Aggarwal is a Senior Data Scientist at Commonwealth Bank of Australia with over 12 years of experience building production AI across healthcare, banking, energy, and telecom. He has led enterprise NLP and agentic AI initiatives that convert complex, unstructured data into measurable business outcomes. A primary inventor on multiple patent filings with the USPTO, he focuses on deploying AI systems that perform reliably at scale. He also contributes to the broader AI community as a Kaggle Notebook Expert and invited AI/ML guest faculty with MIT and JHU.
Image
Yash Dosi
Yash Dosi is an Engineering Manager at Adobe with over 12 years of experience building scalable software and AI-powered products. He currently leads engineering teams in Bengaluru, driving end-to-end delivery of robust, optimized solutions to complex real-world problems. His prior experience includes software engineering roles at Samsung Research India and founding JEE-Portal, where he gained early entrepreneurial and product-building exposure. He is extremely passionate about bridging the gaps between theory and application.

High-Dimensional and Bayesian-Driven Enhancements to Ward's Hierarchical and Lloyd's K-Means Clustering

Abstract

This project surveys clustering algorithms and shows how lemmas and models from various areas of mathematics, specifically High Dimensional Probability and Bayesian Scientific Computing, can be applied to reduce the algorithms' complexity and enforce restrictions (priors) that filter outliers. The project introduces both Ward's (hierarchical) and Lloyd's (K-means) classical algorithms, along with their shortcomings. It then details how Ward's complexity can be decreased via the logarithmic dimension reduction of the Johnson–Lindenstrauss lemma, how Lloyd's sensitivity to the initialization position can be combated by constructing a Bayesian model containing encoded priors, and how the inherent ill-posedness of the problem can be addressed with a MAP regularizer and Dirichlet priors. Additionally, this research included the implementation of both optimized algorithms. A summary of performance in differing use cases is included, highlighting how classical algorithms can be customized and tailored to more specific scenarios.
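As a rough illustration of the dimension-reduction step described above, the following NumPy sketch (a toy example, not the project's implementation; the data, epsilon, and target dimension are all invented for illustration) projects high-dimensional points through a random Gaussian matrix in the spirit of the Johnson–Lindenstrauss lemma and checks that a pairwise distance is approximately preserved:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy high-dimensional data: n points in d dimensions
n, d = 200, 10_000
X = rng.normal(size=(n, d))

# A standard JL bound: k = O(log n / eps^2) target dimensions
eps = 0.5
k = int(np.ceil(4 * np.log(n) / (eps**2 / 2 - eps**3 / 3)))

# Random Gaussian projection, scaled so distances are preserved in expectation
R = rng.normal(size=(d, k)) / np.sqrt(k)
X_low = X @ R

# Pairwise distances survive up to a (1 +/- eps) factor with high probability,
# so clustering the k-dimensional points is far cheaper than clustering in d dims
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(X_low[0] - X_low[1])
ratio = proj / orig
```

Because k grows only logarithmically in the number of points, a hierarchical method like Ward's can run on the projected data at a fraction of the original per-distance cost.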

Bio

Image
Idan Davidovich
Idan Davidovich is pursuing the Machine Learning specialization of the OMSCS program in conjunction with his graduate applied mathematics studies.

Determining Railroad Crossing Delay Predictability Using Inference Data Driven Machine Learning Methods in Complex Geographical Environments

Abstract

Charleston, South Carolina, a region of approximately 869,000 people, consists of many different ecosystems, such as coastal uplands, near-shore islands, marshes, barrier islands and beaches, and riverine watersheds, waterways, and wetlands, making it a topologically complex region. Given Charleston's coastal location, freight traffic is a large contributor to the economy. This traffic leads to transportation delays due to rail and drawbridge blockages, affecting the transportation times of South Carolina residents daily, with delays averaging 6.5 minutes from 2022-2023. Drawbridge blockages occurred on average 0.5 times a day in both 2022 and 2023. Railroad blockages occurred on average 4.5 times a day in 2023 and 6.7 times a day in 2022. These delays impact residents' quality of life by slowing transportation during emergencies and daily commutes. If these delays can be confidently predicted, they could be communicated to residents and first responders to better optimize travel plans. Data collection for railroad crossings is currently limited to only 7 pilot sensors. Given the limited data, we will additionally utilize inferential data, such as bus delay, climate, and port traffic data, to train a machine learning model. If predictability can be confidently determined after model validation, potential plans for sensor expansion could be greenlit, and predictions could be communicated to residents on dynamic message signs. Attendees will learn about real-world applications of machine learning models and how they can benefit society.

Bios

Image
Isaac Felix
Isaac Felix is a Georgia Tech MSCS student and Machine Learning Researcher dedicated to creating human-centered systems. Alongside his full-time role as a Quantitative Analyst, his work explores how advanced computing can safely and equitably augment human capabilities. This core focus encompasses his distinct experiences across civic AI policy and ACM-published research in Human-Robot Interaction.
Image
Leo Savasta
Leo Savasta is a Senior Data Scientist in financial services where he develops Machine Learning models for credit risk prediction and assessment. He is pursuing an MSCS degree with a focus in Machine Learning at Georgia Tech. He is passionate about statistically grounded interpretable models for risk and decision making, including applications to transportation systems, and he is interested in domains spanning policy, economics, and healthcare.

Early Risk Detection on the Internet: Mental Health Modeling in the CLEF eRisk 2025–2026 Challenges

Abstract

Mental health remains one of the most pressing global challenges, particularly as social interaction and self-expression increasingly occur online. Early risk detection systems aim to identify vulnerable individuals before crises escalate, offering opportunities for timely intervention. This poster presents the participation of the Data Science @ Georgia Tech Applied Research Competitions (DSGT ARC) team in the CLEF eRisk 2025–2026 lab, which explores evaluation methodologies and system designs for early risk detection on the Internet. We reflect on our work in the 2025 challenge on conversational early detection of depression and present ongoing advances for 2026. This year’s participation spans two complementary tasks. The first involves interacting with a large language model (LLM) prompted to emulate a depressed persona and automatically assessing depression severity according to the Beck Depression Inventory-II (BDI-II) criteria. The second focuses on ranking Reddit documents relevant to ADHD symptoms based on the ASRS v1.1 questionnaire, requiring large-scale retrieval over more than four million posts. Our approach investigates a spectrum of modern information retrieval and NLP methodologies, ranging from sparse retrieval models such as BM25 to dense semantic similarity and embedding-based systems, as well as prompt engineering and agentic LLM pipelines. By systematically comparing retrieval paradigms and integrating structured LLM-based reasoning components, we examine trade-offs between interpretability, scalability, computational cost, and predictive performance. Through shared benchmarking in eRisk, we aim to advance responsible, reproducible, and socially beneficial AI systems for early mental health risk detection, while contributing practical insights for both research and real-world deployment contexts.

Bio

Image
David Guecha
David Guecha is an OMSCS graduate student at Georgia Tech and team lead for DSGT ARC’s participation in the CLEF eRisk lab. His research explores early mental health risk detection using NLP, information retrieval, and large language models. He is particularly interested in building responsible AI systems that support social good and real-world health applications.

Benchmark for Fine-Grain Object Detection of the Hume’s Leaf Warbler with CameraTraps

Abstract

Despite recent advances in computer vision, object detection in wildlife camera trap data remains a significant challenge due to fine-grained targets, environmental camouflage, motion blur, and high false positive camera triggering. We introduce HLW2025, a challenge dataset focused on the Hume's leaf warbler, consisting of over 3,000 annotated frames from videos captured in the dense forests of Northern India. HLW2025 targets failure scenarios for state-of-the-art models, including extreme occlusion, fine-grained detection, and motion blur challenges common in small bird monitoring. Evaluation across state-of-the-art object detection models reveals significant performance variation in these challenging conditions, with the Swin Transformer achieving the highest mAP@[0.5:0.95] of 0.567. The HLW2025 dataset exposes key limitations in current methods and underscores the need for domain-adapted approaches for fine-grained wildlife detection with camera traps, particularly highlighting that generic architectures may be poorly suited for single-class small object detection in cluttered natural environments.

Presenters: James Hennessy and Kaushika Mohan

Bios coming soon


Personalizing Teaching through Instructor-AI Collaboration: The SAMI Dashboard

Abstract

Large-scale online learning environments create structural challenges for instructor awareness: the low-social-cue, asynchronous nature of digital communication reduces visibility into learner engagement and limits the timely identification of disengagement, social isolation, or academic difficulty, while restricted interpretive access to behavioral and social signals further constrains early and empathetic intervention. The SAMI Dashboard addresses this visibility gap by translating engagement data from the Social Agent Mediated Interactions (SAMI) tool into pedagogically actionable insight. It visualizes participation, collaboration, sentiment, personality, and performance trends at both individual and class levels while linking behavioral, social, affective, and outcome indicators to instructional actions such as targeted outreach and monitoring of at-risk students. Integration with large language models (LLMs) adds an interpretive layer that generates natural-language summaries and recommendations, enabling instructors to personalize instructional methods to student needs through context-sensitive analytic guidance. Conversational querying allows analytic outputs to evolve through instructor input, establishing a bidirectional human–AI feedback loop. Fourteen instructor interviews, a mass survey, and a course-based assignment informed iterative refinement of the system, and observed interactions demonstrated active instructor steering of AI-generated insights, illustrating how the dashboard supports reflective, adaptive teaching practice.

Bio

Image
Darby Hudnall
Darby Hudnall is a graduate research assistant and student at Georgia Tech, specializing in Interactive Intelligence. At DILab, Darby works for the Architecture for Learning (A4L) team. She holds a Bachelor of Science in Mechatronics Engineering from UNC Asheville. Darby currently works in industry as a software development engineer in test (SDET). In her free time, she enjoys volunteering with FIRST Robotics and the Society of Women Engineers (SWE).

Cognitive Underpinnings of Movie Genre Preferences: Insights for Personalized AI Systems

Abstract

Movies are among the most ubiquitous forms of entertainment, yet the cognitive factors shaping individual film preferences remain underexplored. This study examines how demographic variables and personality traits influence movie genre inclination through the lens of cognitive science. Using a mixed-methods approach combining a quantitative survey of 145 participants and 11 qualitative interviews, the research investigates the decision-making processes underlying why individuals prefer certain film genres over others. Results indicate that comedy is the most universally favored genre across genders and age groups, while horror consistently ranks lowest. Men show a marked preference for action films, whereas women tend to favor romance. Younger participants gravitate toward animation and fantasy, and individuals from STEM backgrounds exhibit stronger interest in science fiction. Introverts were found to prefer thrillers more than extroverts, while preferences for specific actors and visual imagery further shaped viewing choices. The study interprets these findings through constructs from the Computational-Representational Understanding of Mind (CRUM) theory, explaining how mental representations and social factors jointly influence entertainment preferences. The results have practical applications in mental health interventions (e.g., cinematherapy), media production, and education, where film-based narratives can enhance engagement. Furthermore, these insights have implications for AI personalization and recommendation systems. Cognitive models of preference can inform how intelligent systems like LLMs interpret human intent, emotion, and context, which can lead to more empathetic, human-aligned personalization. Overall, this study underscores how cognitive science principles can inform the design of AI systems that more effectively understand, adapt to, and empathize with users.

Bio

Image
Rohan Limaye
Rohan Limaye is a Master’s student at Georgia Tech, specializing in AI. His interests span machine learning, NLP, cognitive science, algorithms and operating systems. He is a software engineer at Arista Networks and a research student in the Design Intelligence Lab under Dr. Ashok Goel, contributing to the SAMI project to enhance online learning with AI. He is passionate about building systems that meaningfully improve people’s lives. You can find him here – https://linkedin.com/in/rohan-ryl

What We Measure When We Measure Computational Creativity

Abstract

This poster surveys current approaches to evaluating creativity in large-language models and addresses gaps and issues found in current approaches. A recommended framework for evaluation is proposed by applying psychological and philosophical accounts of creativity to measurement design.

Bio

Image
Maritza Mills
Maritza Mills is an OMSCS alum and current graduate student of philosophy at the University of South Carolina.

The Science of Learning in the Information Age: Designing Digital Experiences That Surpass Traditional Education

Abstract

As technology reshapes how students access and engage with knowledge, simply transferring lectures to screens is insufficient. This poster explores how principles from the Science of Learning (SoL) can be used to design digital learning experiences that surpass traditional education by developing reasoning, problem-solving, and long-term understanding. The presentation highlights five core SoL principles and demonstrates how each can be embedded into digital platforms:

Active Learning - Interactive simulations and hands-on environments that require knowledge construction.
Spaced Practice - Adaptive algorithms that resurface material at optimal intervals to strengthen retention.
Metacognition - AI tutors that prompt students to explain their thinking and reflect on learning strategies.
Transfer of Learning - Project-based challenges that encourage applying knowledge across different contexts.
Feedback Loops - Instant, specific feedback systems that scaffold understanding rather than simply marking answers correct or incorrect.

This framework is informed by the presenter's undergraduate education at a university explicitly designed around Science of Learning principles, where courses emphasized interconnected skills, reflection, and real-world application. Additionally, research conducted for CS6460 focused on designing an AI-supported "learn-by-teaching" platform grounded in SoL principles. The poster will help attendees understand how digital environments, when guided by the science of learning, can provide scalable personalization, continuous formative feedback, and more meaningful skill development than traditional educational models. They will leave with practical design strategies to improve courses, platforms, and digital learning experiences, with particular emphasis on fostering transfer of learning as the ultimate measure of understanding.

Bio

Image
Ara Mkhoyan
Ara Mkhoyan is a software engineer at Viasat (global satellite communications company) and MS Computer Science student in Georgia Tech's OMSCS program. Originally from Armenia, he studied Business and Computer Science at Minerva University, living, working, and studying across seven countries. His interdisciplinary background combines technology, education, and global perspectives. He is passionate about technology in space exploration and designing educational experiences grounded in learning science.

The Science Behind Face Verification: A Modern Methods Perspective

Abstract

Face verification plays a central role in digital identity authentication, enabling applications such as remote onboarding, secure account access, and fraud prevention. In real-world verification systems, matching a printed photograph from a government-issued ID with a live selfie remains particularly challenging due to differences in image quality, lighting conditions, print artifacts, and capture environments.

This poster presents a modern perspective on face verification by combining classical computer vision techniques with deep learning-based facial representation learning. The workflow begins with image acquisition and preprocessing, including document boundary detection, illumination normalization, and facial region extraction from both ID images and selfies. It then examines how facial features are encoded into embeddings and compared using similarity-based matching methods.
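The embedding-comparison step described above can be sketched as follows. This is a hedged NumPy toy with random stand-in vectors: in a real system, `id_embedding` and `selfie_embedding` would come from a trained face encoder, and the 0.6 threshold here is purely illustrative rather than a calibrated operating point.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(id_embedding, selfie_embedding, threshold=0.6):
    """Accept the match if similarity clears a decision threshold.

    The 0.6 threshold is illustrative only; deployed systems calibrate it
    on labeled genuine/impostor pairs to hit a target false-accept rate.
    """
    score = cosine_similarity(id_embedding, selfie_embedding)
    return score >= threshold, score

# Stand-ins for embeddings produced by a face encoder (purely synthetic):
rng = np.random.default_rng(42)
id_vec = rng.normal(size=512)
selfie_vec = id_vec + 0.3 * rng.normal(size=512)  # same identity, noisy capture
other_vec = rng.normal(size=512)                  # different identity

same_match, same_score = verify(id_vec, selfie_vec)
diff_match, diff_score = verify(id_vec, other_vec)
```

The same mechanism underlies the cross-domain setting: the dual-encoder framework aims to map ID-photo and selfie captures of one identity to nearby points in this embedding space despite their differing visual statistics.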

Building on this foundation, the poster introduces an experimental verification pipeline designed to improve cross-domain matching between ID photos and selfies. The proposed approach incorporates domain-aware preprocessing and a dual-encoder learning framework to reduce the impact of visual inconsistencies across capture conditions. The poster also highlights ongoing challenges in practical deployment, including liveness detection, fairness, and bias in face recognition systems.

Overall, this work illustrates how traditional image processing and modern deep learning can be integrated to create more reliable, scalable, and ethically aware face verification systems for real-world identity validation.

Presenter: Karthik Nagesh

Bio coming soon


Bridging Empirical and Modeled Biodiversity: Evaluating Community Composition and Stability Using Snapshot USA and IUCN Data

Abstract

Overall Topic: Across the United States, hundreds of camera traps capture images of mammals each year through the Snapshot USA (SSUSA) project. The IUCN (International Union for Conservation of Nature) provides globally standardized range maps through its Red List, based largely on expert opinion and historical data. While these maps serve as the global standard for species distributions, they are often coarse in resolution and may not reflect fine-scale habitat fragmentation, recent range shifts, or local absences. This study asks a central question: how well do these two views of biodiversity—empirical and modeled—agree? By comparing species observed in Snapshot USA data with those predicted by IUCN maps, we evaluate where our understanding aligns or diverges. Using multi-year data, we also explore how empirical sampling affects the stability and completeness of community definitions.

Why It Matters: This study helps reveal whether global biodiversity maps still match real-world data. By linking predicted and observed patterns, we can find where our knowledge is out of date, figure out how much real data we need to trust our picture of a community, and improve conservation planning. These comparisons help us notice when animals disappear or move into new places because of climate change or habitat loss.

Main Points: 1. Compare empirical and predicted biodiversity. 2. Assess data completeness and community stability. 3. Detect ecological change.

What Attendees Will Learn: Using biodiversity data, this talk shows how real evidence challenges old assumptions, how missing data mislead decisions, and how clear comparisons uncover the true patterns behind complex systems.

Bios

Image
Neelima Pandey
Neelima Pandey lives in Houston and serves as an Adjunct Computer Science faculty member while pursuing an M.S. in Computer Science (Machine Learning) at Georgia Tech. Her research interests include statistical modeling, machine learning, deep learning, and AI applications in ecology and environmental systems. She brings prior experience in engineering and technology entrepreneurship, which informs her interest in applying machine learning to data-driven real-world challenges.
Image
Kefei Yan
Kefei Yan is an MSCS student at Georgia Tech specializing in Artificial Intelligence. With a background in Biochemistry and Data Science, Kefei currently researches spatial camera trap data with the Human-Augmented Analytics Group (HAAG). His academic focus bridges AI and sustainability, exploring how computational methods can address environmental challenges.

Making AI Easier to Think With: Reducing Cognitive Load in AI-Assisted Decisions

Abstract

AI assistants, copilots, and recommender dashboards often generate more information than people can comfortably process, turning "help" into cognitive overload. This poster examines the human side of AI-assisted decision-making: how the structure, sequencing, and visual presentation of AI outputs can increase or reduce mental effort. Rather than proposing new algorithms, it synthesizes practical, user-centered patterns that make AI guidance easier to understand and act on - surfacing key recommendations first, grouping related details, and using progressive disclosure so users can decide when to go deeper. It also highlights the role of visual hierarchy and timing, such as short summaries before longer explanations, in making complex outputs easier to process. Finally, the poster connects these choices to perceived trust, confidence, and decision quality - especially when users must compare options, weigh tradeoffs, or justify a final choice. Attendees will leave with concrete design takeaways for presenting AI output so that detail becomes optional rather than overwhelming, and AI tools feel more intuitive, usable, and trustworthy.

Bio

Image
Anvi Patel
Anvi Patel is a front-end software engineer with experience building accessible, high-performance web products using React and TypeScript. She has worked on large-scale, customer-facing web products and is an OMSCS student specializing in Artificial Intelligence. Her current interests sit at the intersection of HCI and AI - especially how interface design can reduce cognitive load and help people make decisions more confidently.

AI-Driven Framework to Mitigate Colorism and Promote Inclusive Representation for Women Across Asia

Abstract

This poster presents an AI-driven inclusivity framework that aims to address how colorism, deeply embedded in Asian societies and traditional mindsets, continues to marginalize women with darker skin tones across corporate, educational, and entertainment sectors. Cultural biases reinforced through beauty standards and media representation are increasingly being replicated and amplified by artificial intelligence systems, including beauty filters, hiring tools, and digital marketing algorithms trained on imbalanced datasets. The proposed study will employ a mixed-method, data-driven approach integrating computer vision, natural language processing (NLP), and human-centered evaluation. Publicly available image datasets (e.g., Fitzpatrick17k, FairFace) will be combined with multilingual Asian text corpora. Baseline deep learning models (ResNet-50, EfficientNet-B4) will be evaluated across skin-tone categories using fairness metrics such as statistical parity difference and equal opportunity ratio. Bias mitigation will be explored through dataset rebalancing, adversarial debiasing techniques, and BERT-based sentiment analysis. Additionally, focus groups with Asian women professionals will help interpret model behavior and support the development of ethical design guidelines. Attendees will learn how fairness-aware AI design, cross-cultural datasets, and participatory validation can transform technology into a means of dismantling colorist hierarchies. Overall, this research seeks to advance algorithmic fairness and representational justice, envisioning an inclusive digital future where women of all skin tones are represented with dignity across Asia.

Bio

Image
Rifat Kabir Sharna
Rifat Kabir Sharna is an OMSCS student at Georgia Tech with a background in electrical engineering and information systems. She previously worked in the IT Division of a commercial bank, specializing in database and digital banking systems. Alongside her technical work, she has experience as a creative content creator focused on storytelling and cross-cultural connection. A dreamer and advocate for peace and equality, she aims to use technology to promote inclusion and positive social impact.

Modeling Preference Shifts via Reinforcement Learning in Semantic Spaces

Abstract

Recommendation systems on social media platforms are broadly understood to optimize for engagement metrics, yet the long-term effects of such optimization on user interests remain underexplored. This work presents a simulation-based study examining how engagement-optimized recommendation algorithms may influence the evolution of user preferences over time. Users and content are represented as vectors within a semantic embedding space, where RL agents act as synthetic users whose interest vectors update incrementally upon engagement. By simulating repeated recommendation and interaction cycles, we track how user positions drift through semantic space as a result of algorithmic exposure. This work operates within a controlled synthetic environment to explore whether preference drift can emerge as a structural consequence of engagement optimization. We further examine conditions under which synthetic users may be steered from one interest region to another through structured content sequencing. This work aims to bring forth discussion to the ethical implications of algorithmic influence and the potential for engagement-driven systems to shape user interests in ways that may not be immediately visible to users themselves.
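A minimal toy version of the interest-drift mechanic described above might look like the following NumPy sketch, in which an engagement update nudges a synthetic user's interest vector toward consumed content while a biased sequencer steers recommendations across the semantic space (the 2-D space, step size, and bias schedule are illustrative assumptions, not the study's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(7)

# Content items as unit vectors in a toy 2-D semantic space
content = rng.normal(size=(500, 2))
content /= np.linalg.norm(content, axis=1, keepdims=True)

user = np.array([1.0, 0.0])    # initial interest vector
target = np.array([0.0, 1.0])  # region the sequencer steers toward
alpha = 0.05                   # interest-update step size

trajectory = [user.copy()]
for step in range(200):
    # Recommender scores content against a blend of current interests
    # and the target region, with the bias ramping up over time
    bias = step / 200
    scores = content @ (user * (1 - bias) + target * bias)
    item = content[np.argmax(scores)]
    # Engagement nudges the interest vector toward the consumed item
    user = (1 - alpha) * user + alpha * item
    user /= np.linalg.norm(user)
    trajectory.append(user.copy())

# Alignment of the final interest vector with the steered-toward region
drift = float(trajectory[-1] @ target)
```

Tracking `trajectory` over repeated recommendation cycles is the kind of measurement the simulation uses to make preference drift visible as movement through the embedding space.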

Bio

Image
Amanjit Singh
Hi! I’m Aman Singh. I’m fascinated by how computers have shaped the world and how they can be used to help humanity, and I’m exploring research at the intersection of technology and society with plans to pursue a PhD. That said, when I’m not introducing 50 new bugs with my latest “fix,” you’ll find me at the gym, tracking cars (a hobby my rainy-day fund is noticing), or trying to make sense of Dostoevsky.

The Art of Procrastination: Personalized Cognitive Modeling for Goal-Aware Learning

Abstract

This poster examines how personalized cognitive modeling can help understand and reduce procrastination in adult learning. Building on interdisciplinary foundations in cognitive science, psychology, and artificial intelligence, the project treats procrastination not merely as delay, but as a cognitive–emotional process embedded within goal representation, motivation, and self-regulation. The work advances three main contributions. First, it proposes a personalized cognitive model for goal-aware learning that captures how learners set goals, track progress, regulate emotion, and adapt strategies over time. Second, it reconceptualizes procrastination as a dynamic interaction among affect, temporal decision-making, and perceived self-efficacy, distinguishing between productive delay and harmful avoidance. Third, it introduces an interactive AI-supported prototype that visualizes learning timelines, logs behavioral patterns, and delivers adaptive feedback and interventions. This work explores a computational perspective on self-regulated learning by examining how cognitive modeling principles might be translated into data-informed learning systems. It considers how combining emotional and motivational variables with behavioral trace data may support more nuanced personalization beyond traditional performance metrics. The poster discusses potential implications for designing adaptive tools for adult learners in both classroom and self-directed settings, and reflects on the shift from experiencing procrastination as a learner to studying it through formal cognitive and computational frameworks for goal-directed learning.

Bio

Image
Sareen Zhang
Sareen Zhang is an OMSCS student at Georgia Tech with a background in mathematics and computer science. She is interested in interdisciplinary research, especially at the intersection of cognitive science, AI, and education.

 

Demos

Simulating the Impact of AI Coding Tools on Software Developers' Work and Psychology (virtual)

Abstract

This project demonstrates the cognitive and psychological effects of LLM-powered coding tools on developers' work ethic and skills. While these tools can enhance confidence and curiosity by helping with complex problem solving and decision making, they may also deskill developers and induce fear of replacement. The project uses a multi-agent system that simulates human–AI collaboration on coding tasks and compares scenarios ranging from no-AI baselines to full AI autonomy, measuring outcomes such as productivity, trust, and human skill levels over time. The demonstration will feature a web application (MVP) built on an agent-based computational model within a multi-agent system. It simulates interactions between software developers (human agents) and LLM coding tools (AI agents). The simulation will allow varying collaboration scenarios (no-AI, assistive AI, full autonomy) and track outcomes like productivity, trust, and skill change. Implemented in Python (using Mesa for ABM, NumPy for computation, Matplotlib for visualization, and Flask for the API and user interface), the tool will produce graphs and dashboards showing human–AI boundary shifts over time based on task complexity. It is a quantitative tool that can help organizations and researchers balance human–AI collaboration in the software development process.
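To give a flavor of what such an agent-based comparison can look like, here is a deliberately simplified plain-Python sketch (not the project's Mesa implementation; every coefficient is invented for illustration) in which hands-on work grows a developer agent's skill while heavy delegation to an AI agent erodes it:

```python
import random

class Developer:
    """Toy human agent: skill rises with hands-on work, decays when delegating."""

    def __init__(self, skill=0.5):
        self.skill = skill
        self.trust = 0.5

    def work_on_task(self, ai_share):
        # Fraction ai_share of each task is delegated to the AI agent
        produced = self.skill * (1 - ai_share) + 0.9 * ai_share  # AI output rate
        # Learning from hands-on work vs. skill atrophy from delegation
        self.skill += 0.01 * (1 - ai_share) - 0.005 * ai_share
        self.skill = min(max(self.skill, 0.0), 1.0)
        self.trust = min(1.0, self.trust + 0.002 * ai_share)
        return produced

def simulate(ai_share, steps=200, n_devs=20, seed=1):
    """Run one collaboration scenario and return (total output, mean skill)."""
    random.seed(seed)
    devs = [Developer(skill=random.uniform(0.4, 0.6)) for _ in range(n_devs)]
    productivity = 0.0
    for _ in range(steps):
        productivity += sum(d.work_on_task(ai_share) for d in devs)
    mean_skill = sum(d.skill for d in devs) / n_devs
    return productivity, mean_skill

no_ai = simulate(ai_share=0.0)    # no-AI baseline
full_ai = simulate(ai_share=1.0)  # full AI autonomy
```

Sweeping `ai_share` between these extremes is the kind of experiment the demo dashboard exposes interactively, with the update rules replaced by the project's calibrated model.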

Bio

Image
Reena Kamra
Reena Kamra is a Software Engineering Lead and product-focused developer working on scalable CRM and workflow systems. She specializes in backend architecture and data design, building systems that translate complex business needs into practical solutions. Alongside her professional work, she is pursuing her OMSCS from Georgia Institute of Technology, deepening her expertise in advanced computer science.

OMSCS Compass: Alumni-Driven Course and Career Recommendation System

Abstract

OMSCS Compass will be an interactive web-based analytics platform that will leverage alumni feedback to guide current and prospective OMSCS students in selecting courses and planning their academic journey. The system will aggregate alumni survey data and publicly available course reviews to uncover relationships between course choices, perceived workload, and real-world career outcomes. The demonstration will showcase a live, interactive dashboard where attendees can explore alumni insights through visualizations such as career outcome heatmaps, course sequence recommendations, and sentiment-based workload charts. Users will be able to input their own preferences—such as career goal, available weekly hours, or desired difficulty—and receive personalized course pathway recommendations derived from NLP and clustering models trained on alumni feedback. The session will be structured as a guided walk-through followed by interactive exploration, allowing attendees to see how the platform dynamically adapts recommendations based on user inputs and alumni data trends. Attendees will benefit by understanding how data-driven decision support can enhance academic planning in large-scale online programs. They will also learn how machine learning and natural language processing can transform unstructured alumni experiences into actionable insights for course and career planning. Ultimately, OMSCS Compass aims to foster a more informed, community-driven approach to navigating the OMSCS journey.

Bio

Image
Esha Mahendra
Esha Mahendra is a software engineer and OMSCS student passionate about solving complex engineering problems at the intersection of distributed systems, AI, and large-scale software platforms. With over five years of experience in backend engineering and automation, Esha focuses on building intelligent systems that leverage data and machine learning to improve decision-making. Through global collaboration in the OMSCS community, Esha's work explores how AI-driven insights can enhance learning experiences and technology adoption in modern, large-scale online education programs.

Yoshi: The Blame Game

Abstract

“Yoshi: The Blame Game” is a 3D interactive game developed by our team for “CS 6457: Video Game Design”. It combines narrative storytelling, environmental puzzles, and AI-driven animal behaviors to create an engaging learning experience in game design and interactivity. The game follows Yoshi, a loyal dog navigating a household filled with playful chaos caused by a mischievous cat, while trying to complete tasks and earn the owner’s trust. The session will be fully interactive: after my initial live demo, attendees will be able to play the game themselves, experiencing how user inputs, in-game hints, and task logic combine to create a smooth and intuitive learning curve. Through this demonstration, participants will gain insight into game design structure, UI integration, and cross-script interaction in Unity.

Bio

Image
Rifat Kabir Sharna
Rifat Kabir Sharna is an OMSCS student at Georgia Tech with a background in electrical engineering and information systems. She previously worked in the IT Division of a commercial bank, specializing in database and digital banking systems. Alongside her technical work, she has experience as a creative content creator focused on storytelling and cross-cultural connection. A dreamer and advocate for peace and equality, she aims to use technology to promote inclusion and positive social impact.

Program

Check out the Program page for the full program!

Questions About the Conference?

Check out our FAQ page for answers and contact information!