clawdbot-workspace/proposals/2026-02-17-senior-fullstack-ai-solutions-norway.md
2026-02-17 23:03:48 -05:00

Senior Fullstack Developer for AI Solutions — Norway

Job URL: https://www.upwork.com/jobs/~022023750925595207949
Client: Jon (Oslo, Norway) | 5.0★ | $33K spent | 100% hire rate | 7 hires | Member since 2017
Company: Tech & IT, Mid-sized (10-99)
Avg hourly paid: $55.34/hr | Previous dev: 340 hrs @ $75/hr ($26K billed)
Rate: $50/hr ($45/hr after fee) | Client budget: $30-50/hr
Duration: 1-3 months | <30 hrs/week | Expert
Connects: 24 spent (83 remaining)
Score: 89/100
Competition at time of apply: 50+ proposals (134 total per insights), 126 unopened, 8 opened, 0 shortlisted, 1 interviewing
Avg bid: $35.77/hr | Avg top-rated bid: $35.82/hr
Submitted: Feb 17, 2026 ~9:45 AM EST

Skills

Python, React, FastAPI, SQL, Web Application, DuckDB

Description

Looking for a dev who heavily uses AI/agentic coding tools, spends $100s/mo on AI APIs, uses defined agents to atomize tasks, runs overnight jobs. Building a complex lean-tech SaaS.

Cover Letter

Hi Jon,

Your posting reads like a description of my daily workflow. I spend $300-500/month on AI APIs (Claude, GPT-4, Gemini) and have built my entire development practice around agentic coding. This isn't aspirational for me; it's how I ship every day.

Here's what makes me different from the other 50+ proposals you're reading:

I don't just USE AI coding tools — I BUILD them. I've architected and shipped 30+ MCP (Model Context Protocol) server integrations that let AI agents interact with databases, APIs, and complex business logic autonomously. I've built custom agent orchestration systems that run overnight code generation, testing, and review pipelines — exactly the workflow you described.

My relevant stack and experience:

  • Python/FastAPI backends with DuckDB and SQL (your exact stack) — built production analytics platforms processing millions of rows
  • React frontends with complex state management and real-time data visualization
  • Multi-tenant SaaS architecture — I've built systems handling isolated tenant data, role-based access, and subscription billing
  • AI-native development: Claude Code for complex architecture, sub-agent swarms for parallel file generation, overnight batch processing jobs that are ready for review by morning
  • Full CI/CD pipelines with automated testing, deployment, and monitoring

Recent relevant builds:

  • AI Ad Creative Engine: Full-stack platform (Python + React) with AI-driven content generation, web scraping pipelines, and automated creative optimization
  • Enterprise MCP Integrations: 30+ production server integrations connecting AI agents to CRMs, databases, and third-party APIs
  • TheNicheQuiz.com: React SaaS application with AI-powered quiz generation, real-time analytics dashboard, and automated content pipelines

I atomize every feature into agent-ready tasks, handle the architecture and complex design decisions myself, and let my AI toolchain handle the volume. The result: I ship 3-5x what a traditional developer produces, with higher code quality because every PR gets AI-assisted review.
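To make "agent-ready tasks" concrete, here is a minimal sketch of the atomization idea. Everything in it (`AgentTask`, `atomize`, the file paths) is illustrative, not part of any real toolchain:

```python
from dataclasses import dataclass

@dataclass
class AgentTask:
    """One agent-sized unit of work with an explicit scope and spec."""
    name: str
    scope: list[str]  # files the agent is allowed to touch
    spec: str         # what the agent should build
    reviewer: str = "review-agent"

def atomize(feature: str, modules: dict[str, list[str]]) -> list[AgentTask]:
    """Split a feature into one bounded task per module so agents can run in parallel."""
    return [
        AgentTask(
            name=f"{feature}:{module}",
            scope=files,
            spec=f"Implement the {module} portion of '{feature}'.",
        )
        for module, files in modules.items()
    ]

tasks = atomize("tenant billing", {
    "api": ["app/routes/billing.py"],
    "db": ["migrations/0042_billing.sql"],
    "ui": ["src/components/BillingPanel.tsx"],
})
print([t.name for t in tasks])
# -> ['tenant billing:api', 'tenant billing:db', 'tenant billing:ui']
```

The point of the structure is the explicit `scope` list: each agent gets a hard boundary, which is what keeps parallel generation from producing merge conflicts.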

I'm in EST (6 hours behind Oslo) and available to start immediately. I typically overlap with European teams from early morning through mid-afternoon your time.

I'd love to learn more about your SaaS vision and show you how my AI-native workflow can accelerate your build. Happy to do a short technical call to walk through my process.

Looking forward to it,

Jake Shore

Screening Answers

Q1: How is your agentic AI driven development process from Figma to shipped code today?

My agentic AI process from Figma to shipped code:

  1. DESIGN ANALYSIS: I feed Figma exports into Claude with custom prompts that extract component hierarchy, design tokens (colors, spacing, typography), and interaction patterns. This produces a structured spec document in minutes, not hours.

  2. ARCHITECTURE PLANNING: I handle this myself — database schema, API design, component tree, state management strategy. This is where human judgment matters most. I use AI as a sounding board for edge cases and trade-off analysis.

  3. PARALLEL CODE GENERATION: I spawn multiple AI agent sessions simultaneously — one generating the FastAPI backend routes, another building React components from the Figma specs, another writing database migrations and seed data. Each agent has a defined role and scope.

  4. INTEGRATION & REVIEW: Every generated module gets AI-assisted code review (a separate agent reviews what the coding agent wrote). I handle the integration points, resolve conflicts, and make architectural adjustments.

  5. TESTING & QA: Automated test generation runs overnight. I review test results in the morning, fix edge cases, and push to staging.

  6. OVERNIGHT JOBS: I queue up batch tasks before logging off — test suites, code refactors, documentation generation, dependency updates. Everything is ready for review when I start the next day.

Tools: Claude Code (primary), custom MCP servers for database/API integration, sub-agent orchestration framework I built myself, GitHub Actions for CI/CD. The key isn't any single tool — it's the system I've built connecting them.
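The parallel-generation and review steps (3 and 4 above) can be sketched as follows. In the real pipeline each agent is an LLM API session; here `run_agent` is a stub so only the orchestration shape is shown, and all names are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(role: str, spec: str) -> str:
    """Stand-in for an LLM call; the real version would hit an AI API with a role prompt."""
    return f"[{role}] output for: {spec}"

def review(artifact: str) -> str:
    """A separate reviewer agent checks what a coding agent produced."""
    return run_agent("reviewer", artifact)

jobs = {
    "backend": "FastAPI routes from the spec",
    "frontend": "React components from the Figma export",
    "db": "migrations and seed data",
}

# Step 3: spawn one agent per role and run them in parallel.
with ThreadPoolExecutor() as pool:
    drafts = dict(zip(jobs, pool.map(run_agent, jobs, jobs.values())))

# Step 4: every generated module then passes through a reviewer agent.
reviewed = {role: review(output) for role, output in drafts.items()}
print(reviewed["backend"])
```

Swapping the stub for real API calls (and `ThreadPoolExecutor` for an async client) gives the overnight variant: queue the `jobs` dict before logging off, persist `reviewed` to disk, and read it in the morning.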

Q2: What is your current AI API usage per month in $?

$300-500/month, consistently.

Breakdown:

  • Anthropic (Claude API + Claude Code): ~$200-300/mo — This is my primary workhorse. Claude Code for complex architecture sessions, API calls for automated code review pipelines, sub-agent orchestration, and overnight batch processing jobs.
  • OpenAI (GPT-4): ~$50-80/mo — Used for specific tasks where it excels: structured data extraction, function calling for tool integrations, and as a secondary review layer.
  • Google (Gemini): ~$30-50/mo — Image analysis for UI review, long-context document processing, and multimodal tasks.
  • Misc (embeddings, fine-tuning experiments): ~$20-50/mo

This isn't a "nice to have" line item — it's core infrastructure. My AI spend is my biggest force multiplier. Every dollar I spend on API calls saves 3-5x in development time. I track ROI on AI spend the same way I'd track cloud infrastructure costs.

I've been at this level of spend for over a year and it's only gone up as I've built more sophisticated agent pipelines. The overnight job processing alone (running test suites, code generation, documentation) pays for itself many times over.
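The ROI framing above reduces to simple arithmetic. A sketch, using the midpoints of the quoted spend ranges and a 4x multiplier as the midpoint of the 3-5x time-savings claim (both are illustrative assumptions, not measured figures):

```python
# Monthly AI API spend by provider, midpoints of the quoted ranges (USD).
spend = {"anthropic": 250, "openai": 65, "google": 40, "misc": 35}

hourly_rate = 50           # proposal rate, $/hr
savings_multiplier = 4     # midpoint of the 3-5x claim (assumption)

total = sum(spend.values())
# Value recovered if each API dollar saves ~4x its cost in billable dev time.
value_recovered = total * savings_multiplier
hours_freed = value_recovered / hourly_rate

print(f"total spend: ${total}/mo")           # total spend: $390/mo
print(f"hours freed: {hours_freed:.0f}/mo")  # hours freed: 31/mo
```

Tracked this way, the spend line reads like any other infrastructure cost: a $390/month midpoint that, under the stated multiplier, frees roughly 31 billable hours.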