Hannah K - Research Program Mentor | Polygence

Hannah K
Research Program Mentor

MEng at Stanford University

Expertise

AI Ethics, Human Centered Design, Tech Policy, Accessible AI design, Computer Graphics, Computer Vision, ML

Bio

I care deeply about building AI and computing systems that are safe, transparent, and aligned with human values. I studied Computer Science and Human-Computer Interaction at Stanford, focusing on responsible tech design and AI safety. My work looks at how we shape system behavior before and after deployment, especially in high-stakes areas like cybersecurity and law, where poor design can lead to real harm. I’m most excited by research that blends technical work with human context, like auditing LLMs for hallucinations, studying the privacy risks of network protocols, and designing benchmarks that reflect real-world use.

Outside of research, I’ve played cello for 15 years and spent three of those years in the Stanford Symphony Orchestra. Skiing is one of my favorite things in the world, and I just got my scuba diving certification! When I’m not outdoors or playing music, I’m usually taking photos or editing videos just for fun.

Project ideas

Project ideas are meant to help inspire student thinking about their own project. Students are in the driver’s seat of their research and are free to use any or none of the ideas shared by their mentors.

Designing a Mental Health Reflection App with AI

I work at the intersection of AI, design, and human-computer interaction. I can help students explore how AI can support mental health, self-reflection, and well-being, while also examining the tradeoffs around privacy and emotional dependency.

In this project, the student will learn foundational UX and product design skills, including wireframing, user research, and design critique. They’ll also be introduced to basic natural language processing concepts (like sentiment analysis) and ethical considerations in mental health technology.

The student will begin by identifying a target user (e.g., teens managing stress or students tracking moods), then sketch or prototype a journaling app that uses AI to generate reflective prompts or feedback. Through research and interviews, they’ll examine key questions: How do we keep user data private? When should AI offer support vs. stay silent?

Possible outcomes include app wireframes, a design case study, or a low-code prototype using tools like Figma or React Native. This project is ideal for students interested in psychology, ethics, wellness, and the future of emotionally intelligent AI.
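To make the sentiment-analysis idea concrete, here is a minimal, self-contained Python sketch of the core loop: score a journal entry, then pick a matching reflective prompt. A real prototype would use an NLP library (e.g., NLTK's VADER); the keyword lists and prompt wordings below are made-up placeholders for illustration.

```python
# Tiny keyword-lexicon sentiment scorer; the word lists and prompts
# are illustrative examples, not a real lexicon.
POSITIVE = {"happy", "calm", "grateful", "excited", "proud"}
NEGATIVE = {"stressed", "anxious", "sad", "tired", "overwhelmed"}

def sentiment_score(entry: str) -> int:
    """Return (# positive keyword hits) - (# negative keyword hits)."""
    words = {w.strip(".,!?").lower() for w in entry.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

def reflective_prompt(entry: str) -> str:
    """Choose a reflective prompt based on the entry's overall tone."""
    score = sentiment_score(entry)
    if score > 0:
        return "What made today feel good, and how could you repeat it?"
    if score < 0:
        return "What is one small thing that might ease this feeling?"
    return "How would you describe today in one word, and why?"

print(reflective_prompt("I felt stressed and tired after the exam."))
```

Even a toy like this surfaces the design questions in the project: when should the app respond at all, and what happens to the entry text afterward?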

Designing Ethical AI Systems: A Case Study and Redesign Project

My background is in AI ethics, human-computer interaction, and responsible technology design. I help students think critically about how AI systems shape our lives, and how we might redesign them to align with human values like fairness, transparency, and well-being.

In this project, the student will explore the inner workings and social impacts of a real-world AI system, such as facial recognition in public spaces, TikTok’s recommendation algorithm, or predictive policing tools. They’ll learn how to research technical systems, identify harms (like bias, addiction, or surveillance), and evaluate existing critiques from journalism, academia, and public discourse.

The student will gather information through articles, academic papers, videos, and interviews with peers or users. They’ll map out how the system works, who it affects most, and what values are embedded in its design.

The final product could be a visual redesign (using Figma, Canva, etc.), a mock policy brief, or a white paper that proposes improvements, such as algorithmic transparency, consent mechanisms, or more inclusive data practices. This project is ideal for students interested in tech ethics, design justice, or making AI safer and more equitable.

AI for Civic Discourse: Analyzing Political Bias in Language Models

I specialize in AI ethics, language models, and the social impacts of emerging technologies. I can help students explore how AI systems shape public conversation, political understanding, and trust in information.

In this project, the student will examine whether AI chatbots (like ChatGPT, Claude, or Gemini) express subtle political biases when answering sensitive or controversial questions. They’ll learn how to design fair prompts, define an evaluation rubric (e.g., tone, framing, omission of context), and analyze responses across different models or prompt variations.

The student will gather data by testing multiple AI systems with politically charged or value-laden prompts. They’ll organize and code the outputs using spreadsheets or Python tools, and reflect on patterns they observe, like ideological leanings, hesitations, or overconfidence.

The final outcome could be a research paper, an annotated dataset with visualizations, or a blog post explaining the implications of AI bias in civic life. This project is great for students interested in politics, journalism, computer science, or the future of public dialogue in the age of AI.
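The "organize and code the outputs" step can be sketched in a few lines of Python: each model response gets a hand-assigned rubric label, and the labels are tallied per model. The model names, prompt IDs, and labels below are made-up placeholders, not real model outputs or a real rubric.

```python
from collections import Counter

# (model, prompt_id, rubric_label) rows, as a student might code them
# after reading each response; entries here are illustrative only.
coded_responses = [
    ("model_a", 1, "balanced"),
    ("model_a", 2, "leans_left"),
    ("model_b", 1, "refused"),
    ("model_b", 2, "balanced"),
]

def tally_by_model(rows):
    """Count how often each rubric label appears for each model."""
    counts = {}
    for model, _prompt_id, label in rows:
        counts.setdefault(model, Counter())[label] += 1
    return counts

for model, counts in tally_by_model(coded_responses).items():
    print(model, dict(counts))
```

With a few hundred coded rows, the same tallies feed directly into the visualizations or annotated dataset the project description mentions.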

Coding skills

Python, C++, C, JavaScript

Languages I know

Korean, intermediate

Teaching experience

At Stanford, I’ve served as a teaching assistant and assistant lecturer for CS148 (Computer Graphics) for the past three years, supporting students through both foundational and advanced topics in graphics. I’ve also mentored high school students on independent research and passion projects, especially in the area of AI—guiding them through ideation, technical development, and communication of their work.

Credentials

Work experience

Stanford Regulation and Evaluation Lab (2021 - 2025)
Researcher
Stanford Empirical Security Research Group (2024 - 2025)
Researcher
Gremix (2023 - 2024)
Computer Vision Engineer

Education

Stanford University
BSE Bachelor of Science in Engineering (2024)
Computer Science
Stanford University
MEng Master of Engineering (2025)
Human-Computer Interaction
