Envira turns satellite and environmental data into decision-ready intelligence for finance, government and agriculture. We combine Earth Observation, modern ML and AI, and regulatory expertise to deliver auditable, business-ready risk metrics, from parcel-level agricultural insight to property-level flood scoring. You will join a small, technical team building tools that banks, insurers and public agencies use to make critical climate and financial decisions.
We are looking for an ambitious student to help us scale the data backbone behind our products. You will work closely with the core engineering and modelling team on the databases and pipelines that store, serve and process billions of satellite observations and millions of parcel- and property-level records. Though this is formally a student position, you will have real responsibility and the opportunity to shape our technical development. We expect someone with relevant, hands-on experience who can contribute from day one. The focus is on building reliable, performant Postgres-based systems and on running large-scale computations efficiently against them.
Design, maintain and evolve our PostgreSQL/PostGIS schemas for geospatial and time-series data.
Tune queries, indexes and storage layouts so our pipelines run fast and predictably at scale.
Build and improve data ingestion and processing pipelines that move Earth Observation data from raw tiles to product-ready tables.
Support large-scale computation across our cloud environment (Google Cloud) and on-prem workstations.
Contribute to monitoring, backups and data quality checks so the team can move quickly with confidence.
Work collaboratively with the modelling and product teams and take ownership of defined tasks.
Currently enrolled in a BSc or MSc programme in computer science, software engineering, data engineering, or a related field.
Solid, hands-on experience with PostgreSQL: schema design, writing non-trivial SQL, understanding indexes and query plans.
Comfortable with Python and Git, and with building data pipelines or batch jobs.
Familiarity with large-scale or distributed computing concepts — e.g. parallel processing, cloud compute, working with datasets that do not fit in memory.
Good engineering practices: version control, testing, clear code, and the ability to debug systems beyond your own code.
Based in or near Copenhagen and available for regular in-person collaboration.
Nice to have: experience with PostGIS or other spatiotemporal databases, raster/vector data formats (e.g. GeoTIFF, Parquet, Zarr), Google Cloud (GCS, BigQuery, Cloud Run), Docker, or workflow orchestration tools (e.g. Airflow, Prefect, Dagster).
Real responsibility for the data infrastructure that supports our customers.
Close mentorship from senior colleagues and direct exposure to how a climate/fintech product is built end-to-end.
Hands-on experience with large geospatial datasets in an applied, impactful setting.
Flexible hours to fit your studies and a collaborative office environment.
Apply on The Hub with: CV, grade transcripts, a short motivation (one page max) and links or attachments showcasing relevant work (GitHub, project reports, schemas or queries you are proud of). We prioritise demonstrated experience and practical results. If you have built or operated databases or data pipelines — in coursework, internships, side projects or work — we want to hear from you.
