Agentic Data Analysis for Market Intelligence — Memtrace Growth

Salary: Unpaid

Can you turn a wall of GitHub stars, Reddit threads, and pricing pages into a competitive position that actually changes how a product goes to market? Can you run an AI agent the way a good analyst runs a spreadsheet — with a clear question, a rigorous method, and a conclusion you are willing to defend? Do you understand the difference between data and intelligence?

If yes — we want to talk.

At Syncable, we are building the memory layer for AI coding agents. Our core product is Memtrace — a bi-temporal episodic knowledge graph that gives AI agents (Claude Code, Cursor, Copilot, Windsurf) persistent, replayable memory for the first time. 70 to 90 percent fewer wasted tokens. First-attempt success on multi-file tasks. One binary, MCP-native, running on production codebases today.

The category does not fully exist yet. That is the opportunity — and the problem we need to solve together.


The company

AI coding agents are the fastest-growing software category on the planet. Cursor. Copilot. Claude Code. Every one of them hits the same ceiling: they cannot remember what they did thirty seconds ago. When something breaks, they start over.

Memtrace solves this. Every change the agent makes is recorded as a replayable episode. The entire codebase becomes a live knowledge graph. Two temporal timelines let the agent rewind to any moment. Something breaks — it does not restart. It rewinds. Replays. Fixes forward.

We are MCP-native from day one. No integration code. One binary. Plugs directly into every major coding agent on the market.

Distribution is frictionless by design: MCP marketplaces, Homebrew, one command. No sales team required for the first million installs.


The role

Memtrace needs intelligence, not just data. We are building the first persistent memory layer for AI coding agents — but we need to know exactly where we sit in a market that is moving fast, who is chasing the same category, how developers discover and decide on tools like ours, and what the right pricing architecture looks like as we scale from solo installs to team and OEM tiers.

You will own this intelligence function end to end. You will use AI agents as your primary research tool — not to replace your thinking, but to compress the time between a question and a defensible answer from days to hours.


What you will own:

Competitor intelligence. Run systematic, agent-assisted monitoring of the three competitive vectors that matter most to Memtrace: agent memory tools (mem0, Zep, MemGPT / Letta, LangGraph memory), code intelligence and IDE platforms (Cursor, Copilot, Codeium, Windsurf), and RAG / vector database infrastructure (Pinecone, Weaviate, ChromaDB). This is not a monthly slide deck. This is a living picture — weekly GitHub star and contributor trajectory, documentation diffs that surface feature releases before they are announced on social, pricing page changes, sentiment in developer communities (HackerNews, r/LocalLLaMA, r/MachineLearning, Discord). You will use Claude and other AI agents to process the volume, but the synthesis — what this means for Memtrace's positioning — is yours.

GTM analysis. Map the actual distribution paths that move developer tools from zero to traction. Where do AI coding developers congregate when they are evaluating new tools? What problems surface in GitHub issues, Cursor Discord, Claude Code forums, and developer X/Twitter that Memtrace can credibly solve? Which content formats — benchmarks, side-by-side demos, token cost comparisons — actually convert developers who are skeptical of yet another AI tooling claim? You will run agent-driven scans across these surfaces, identify the moments of highest intent, and translate those signals into GTM recommendations: which communities to seed first, which message lands with which persona, what the conversion path looks like from Homebrew install to paid team seat.

Pricing analysis. The open core model is clear — free solo tier, paid team tier, OEM licensing. What is less clear is where exactly to draw the lines, what the market expects at each tier, and how to frame the value proposition in terms developers and engineering leads actually use to justify spend. You will track competitor pricing across the memory and RAG category, analyze community discussions about pricing sensitivity (what is the threshold where developers upgrade vs. stay free?), and model the value Memtrace delivers in concrete terms: if we save 70 to 90 percent of wasted tokens, what does that translate to in dollars per month at different usage levels? Those numbers should be in every sales conversation and every pricing page before the team tier ships.
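The token-savings arithmetic above can be sketched in a few lines. This is an illustrative model only: the token price and monthly waste volume below are assumptions for the sake of the example, not Memtrace figures.

```python
# Illustrative token-savings model. Price per million tokens and the
# monthly waste volume are assumed numbers, chosen only for illustration.

def monthly_savings_usd(wasted_tokens_per_month: float,
                        price_per_million_usd: float,
                        reduction: float) -> float:
    """Dollars saved per month if `reduction` (0.0-1.0) of wasted tokens are avoided."""
    wasted_cost = wasted_tokens_per_month / 1_000_000 * price_per_million_usd
    return wasted_cost * reduction

# Example: a team wasting 500M tokens/month at an assumed $3 per million.
low = monthly_savings_usd(500_000_000, 3.0, 0.70)   # 70% reduction
high = monthly_savings_usd(500_000_000, 3.0, 0.90)  # 90% reduction
print(f"${low:,.0f} - ${high:,.0f} saved per month")  # prints "$1,050 - $1,350 saved per month"
```

Running this at a few usage tiers is exactly the kind of concrete dollars-per-month framing the role calls for on pricing pages and in sales conversations.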

Intelligence synthesis. Every week, you will produce a short, sharp intelligence brief — what changed in the competitive landscape, what the GTM data is telling us, what pricing signals are worth acting on. Short and clear beats long and comprehensive. The goal is decisions, not documentation.


How AI agents fit into this work

You will use Claude, Perplexity, and other AI agent tools as research infrastructure — not as a shortcut to skip thinking. Concretely, this means:

Running structured agent queries across GitHub APIs, HackerNews search, Reddit, and developer forums to surface competitor signals at a cadence that would be impossible manually. Feeding competitor documentation, changelog entries, and pricing pages into AI analysis workflows to identify what changed and what it means. Using agents to synthesize large volumes of community discussion into the signal that actually matters: developer frustrations, unmet needs, and language patterns that indicate purchase intent. Building repeatable agent workflows — not one-off queries, but structured processes you can run weekly and compare over time.
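As a minimal sketch of the kind of repeatable workflow described above: the public GitHub REST API (`GET /repos/{owner}/{repo}`) returns a `stargazers_count` field, which can be snapshotted weekly and diffed. The repo slugs below are assumptions for illustration; verify them against the projects you actually track.

```python
# Sketch of a weekly star-trajectory tracker using the public GitHub REST API.
# Repo slugs are assumed examples, not a vetted competitor list.
import json
import urllib.request

REPOS = ["mem0ai/mem0", "getzep/zep"]  # assumed slugs; confirm before use

def fetch_stars(repo: str) -> int:
    """Fetch the current star count for one repo (unauthenticated, rate-limited)."""
    req = urllib.request.Request(
        f"https://api.github.com/repos/{repo}",
        headers={"Accept": "application/vnd.github+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["stargazers_count"]

def weekly_growth(prev: dict, curr: dict) -> dict:
    """Percent star growth per repo between two weekly snapshots."""
    return {
        repo: (curr[repo] - prev[repo]) / prev[repo] * 100
        for repo in curr
        if repo in prev and prev[repo] > 0
    }

# Usage (run weekly, persist the snapshot, then compare):
#   snapshot = {repo: fetch_stars(repo) for repo in REPOS}
last_week = {"mem0ai/mem0": 8_000}
this_week = {"mem0ai/mem0": 22_000}
print(weekly_growth(last_week, this_week))  # prints {'mem0ai/mem0': 175.0}
```

The point is the structure, not the script: a snapshot-and-diff loop you can run on the same cadence every week and compare over time.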

The agents do the volume. You do the interpretation. The output is intelligence that shapes how Memtrace goes to market.


Who you are

  • You are studying data science, economics, business, computer science, or something adjacent — or you have done this kind of work before and can show it.

  • You can think analytically and write clearly. You know the difference between "there is a lot of activity in this space" and "mem0 has grown from 8K to 22K GitHub stars in 90 days, primarily driven by a Python SDK release, which means the Python developer segment is the one to watch."

  • You use AI tools actively in your research workflow and have opinions about how to get useful output from them, not just prompts that produce noise.

  • You are curious about developer tooling and the AI coding agent market. You do not need to be an engineer, but you need to care enough about what Memtrace actually does to explain it accurately to someone who has never heard of it.

  • You are comfortable with ambiguity and can define your own research questions when given a direction but not a script.

  • You speak and write fluently in English.


What you will gain

  • Deep hands-on experience running agentic research workflows on a real competitive intelligence problem — at the moment this methodology is becoming a core skill in data and strategy roles everywhere.

  • A complete picture of how a technical B2B developer tool goes to market: from community seeding to open core conversion to OEM licensing.

  • Real outputs that ship: the competitive intelligence you produce will directly inform how Memtrace is positioned, priced, and distributed. Not slides for an internal presentation. Actual decisions.

  • A portfolio of structured intelligence work that demonstrates analytical thinking, AI tool fluency, and market judgment — the combination that is genuinely hard to find and increasingly valuable.

  • Direct exposure to a founding team building infrastructure for the AI coding ecosystem, at the moment the category is being defined.

Perks and benefits

This job comes with several perks and benefits:

  • Remote work allowed

  • Central office

  • Free coffee / tea

  • Skill development

  • Flexible working hours

  • Healthcare insurance

Working at Syncable

Syncable — The Cloud OS for the Agentic Era

AI now generates code faster than teams can understand, deploy, or govern it. That's the new bottleneck — and it widens every sprint. Three crises hit simultaneously as AI coding adoption accelerated:

1) The Understanding Gap. 50% of all code is now AI-generated. 1 in 4 AI samples contain a confirmed vulnerability — 2.74× more than human-written code — and 75% of developers say reviewing AI code is harder. Comprehension debt compounds every sprint.

2) The Governance Gap. Static analysis sees code. Observability sees runtime. Cloud tools see infra. Nothing connects all three. 82% of companies have accumulated security debt; 80% of tech debt will be architectural by 2026 (Gartner).

3) The Deployment Gap. 50%+ of code is AI-generated, but ~0% is deployed autonomously. Teams choose between PaaS lock-in or DIY complexity — at a cost of €450k+ per year in platform engineering alone.

Syncable closes all three gaps with one platform: Deploy. Operate. Understand.

1) Deploy to any cloud in 20 minutes. One command, zero YAML, BYOC — your cloud, your keys, EU sovereign.

2) Operate with MTTR reduced from 60 minutes to under 5. A knowledge graph links alert → commit → config change in under 30 seconds.

3) Understand your entire system through a Temporal Knowledge Graph — a living, graph-native record of every service, change, dependency, and deployment event, queryable at any point in history.

No single competitor provides all three layers with a temporal knowledge graph underneath. That graph is the data moat: it compounds with every repo, every deployment, every incident. What Cursor did for coding, Syncable does for deployment — plus the world model that governs it all.

