This summer, I had the privilege of serving as a Research Intern at the Computational Social Science Lab (CSSLab) at Penn, advised by Mark Whiting and Duncan Watts. Working in a hybrid agile-scrum environment, I conducted research alongside three other undergraduates to study human behavior at scale, focusing on integrative experiments that quantify individual and collective common sense. As a computer science junior with minimal prior research experience, I had always been excited by Penn’s unique research opportunities, and I was immediately drawn to the chance to apply skills from the classroom and a previous summer internship to a domain I had long wanted to explore: the intersection of social behavior and computational technology. Having dived into numerous rabbit holes on similar topics, I was increasingly enticed by the need to understand common sense systematically and accurately, especially for building more reliable decision-making capabilities in AI applications. But in order to teach common sense to machines, I first needed to understand what common sense represents for *all* humans.
Over the course of the summer, I gained many valuable insights and skills. I learned that common sense remains poorly understood and elusive, and that understanding it is necessary for the advancement of both social science and AI. Expanding on research by Mark Whiting and Duncan Watts, I sought to understand how common sense varies across languages by leveraging computational methods including full-stack web development, data science, and natural language processing. My work was motivated by the need to extend the original framework to a global audience by enabling multilingual support for the main experiment.
My responsibilities included enabling multilingual support for the main experiment website in 10 languages on both the frontend and backend, reducing experiment scaling effort by streamlining data and text processing pipelines, and contributing enhancements and fixes to other code repositories. For multilingual support, I used TypeScript and external libraries to seamlessly adapt the user interface to a user’s language preferences. On the backend, I improved the tracking and analysis of language-specific data, ensuring accurate storage and retrieval of multilingual statements and user responses and enabling more targeted research insights. Using tools like OpenAI, Pandas, Amazon Translate, and GitHub Actions, I built automated pipelines to efficiently translate large directories of statement files into new languages and render those statements consistently on the platform’s interface. Throughout this process, I honed foundational software development practices: learning new technologies for each task, navigating and troubleshooting large codebases, maintaining code quality through documentation and code reviews, and clearly communicating my solutions to my advisor and peers. Many of these skills are transferable, even beyond a technical setting, and I intend to use them in any environment that calls for solution-oriented applications.
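To give a flavor of the frontend side, here is a minimal sketch of how a UI can adapt to a user's language preferences with a dictionary-based string catalog. This is an illustrative simplification, not the lab's actual code: the catalog contents, function names, and fallback behavior are all hypothetical.

```typescript
// Hypothetical flat key -> string catalogs per language (illustrative only).
type Catalog = Record<string, string>;

const catalogs: Record<string, Catalog> = {
  en: { greeting: "Welcome", submit: "Submit" },
  es: { greeting: "Bienvenido", submit: "Enviar" },
};

// Pick the first supported language from the user's preference list
// (e.g. from the browser), falling back when none match.
function resolveLanguage(preferred: string[], fallback = "en"): string {
  for (const lang of preferred) {
    const base = lang.toLowerCase().split("-")[0]; // "es-MX" -> "es"
    if (base in catalogs) return base;
  }
  return fallback;
}

// Look up a UI string, falling back to English, then to the key itself.
function t(lang: string, key: string): string {
  return catalogs[lang]?.[key] ?? catalogs["en"][key] ?? key;
}

const lang = resolveLanguage(["es-MX", "fr"]); // "es"
console.log(t(lang, "greeting")); // "Bienvenido"
```

In practice, an i18n library handles pluralization, interpolation, and lazy loading of catalogs, but the core idea of resolving a preference list against available translations is the same.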
By the end of my PURM experience, I learned the importance of experimenting early in your professional and academic career. Before starting my research internship, I remember being hesitant about whether research was right for me. But now that I have had time to reflect on this opportunity, I am deeply grateful for the experience and the connections I made. It fit perfectly with my academic aspiration of promoting social good in the tech space, and I believe it will help me reach new milestones as I continue to shape my professional career.