Interhuman AI builds social intelligence infrastructure that makes AI systems capable of understanding human behavior. Our API detects and interprets behavioral signals such as hesitation, engagement, confusion, and interest across voice, facial expressions, body language, and words.
We’re looking for a motivated intern to join our AI and data team in Copenhagen. This is a hands-on role where you’ll work directly with our AI and behavioral science teams on the data, annotation, and interpretation infrastructure that powers our social intelligence API.
Work hands-on with real multimodal data (video, audio, transcripts) used to train and evaluate our models. You’ll help structure, document, and review datasets to ensure consistency, clarity, and usability for behavioral AI systems.
Help review and refine our catalog of social signals and behavioral cues. This includes validating cue definitions, multimodal annotation of clips, and helping maintain consistency across our behavioral science frameworks.
Review model outputs and interpretation results to assess whether they make sense from a behavioral science perspective. You’ll help identify mismatches between raw signals, contextual cues, and high-level interpretations.
Assist in identifying relevant datasets, scientific literature, and annotation practices related to human communication, social interaction, and behavioral observation.
Help spot anomalies, unclear cases, or systematic issues in data and annotations - and clearly document what you find so the team can act on it.
You might be a great fit for this role if you meet some or all of the following:
You're studying or have completed a degree in Psychology, Behavioral Science, Cognitive Science, Human-Computer Interaction, or a related field.
You’re curious about how people communicate through voice, facial expressions, posture, and micro-behaviors. Experience with behavioral coding, observational research, or annotation work is a plus.
Some experience with coding schemes, taxonomies, or rule-based systems helps when working on our interpretation layer.
You know your way around Gen-AI tools for analysis, research, or light prototyping. You don’t need to be an engineer, but you should enjoy exploring new tools and figuring things out. Experience with Python, JavaScript, or similar is helpful but absolutely not required - curiosity and willingness to learn matter more.
You’re comfortable figuring things out independently and know when to ask for help.
You can explain concepts clearly, whether technical or behavioral.
You can balance multiple tasks and adapt as things move quickly.
We understand how important flexibility is. This position requires a minimum of 8 hours of work per week, with the option of more if you're interested. The minimum length of the internship is 2 months. While the internship is unpaid, we offer close mentorship, real responsibility, and hands-on experience at the intersection of AI, behavioral science, and multimodal data. This internship is ideal for someone who wants hands-on experience in AI and behavioral data at an early-stage startup, is curious about how AI can understand human behavior, and enjoys working in a collaborative environment where their input helps shape how our product interprets social signals.
At Interhuman AI, we’re building social intelligence for AI. Our mission is to enable machines to understand how humans actually behave – not just what we say. We’re pioneering a new class of multimodal AI that reads facial expressions, voice, and body language to interpret social signals, delivering AI interactions that feel truly adaptive and emotionally aware.
We’re a small but ambitious team (backed by top investors) with a working prototype, early paying customers, and a vision to become essential infrastructure in the future of conversational AI. We value curiosity, high standards, and collaboration – and we also believe in keeping things fun and supportive.