
Use Cases

From user-generated content to LLM training data, FirstHandAPI powers location-tagged, auto-annotated data collection with a single API call. Four use-case categories below, with structured annotations on every file.

User-Generated Content (UGC)

Collect authentic, location-tagged photos, videos, and audio from real people. Every approved file comes with auto-generated annotations — object labels, OCR, scene classification, color palettes, and more. Perfect for brands, marketplaces, and social commerce platforms.

  • NYC storefront and venue photos geo-tagged to specific neighborhoods
  • Video testimonials filmed at landmarks like Central Park and the High Line
  • Audio reviews and voice testimonials from real customers
  • Location-specific product-in-use lifestyle photos across all five boroughs
curl -X POST https://api.firsthandapi.com/v1/jobs \
  -H "Authorization: Bearer fh_live_..." \
  -H "Idempotency-Key: ugc-campaign-001" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "data_collection",
    "description": "Photograph the storefront of any independent coffee shop. Include the sign and entrance in natural lighting.",
    "files_needed": 50,
    "accepted_formats": ["image/jpeg", "image/png"],
    "price_per_file_cents": 200,
    "location": {
      "city": "New York",
      "state": "NY",
      "latitude": 40.7128,
      "longitude": -73.9860,
      "radius_km": 10
    }
  }'
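Because `files_needed` and `price_per_file_cents` are both set at job creation, the maximum payout is known up front. A client-side sketch of that arithmetic (an illustration only, not part of the API):

```python
def estimate_job_cost(files_needed: int, price_per_file_cents: int) -> str:
    """Estimate the maximum payout for a data collection job, in dollars."""
    total_cents = files_needed * price_per_file_cents
    return f"${total_cents / 100:.2f}"

# The UGC job above: 50 storefront photos at 200 cents ($2.00) each.
print(estimate_job_cost(50, 200))  # $100.00
```

Note this is the ceiling: you pay per approved file, so a job that fills partially costs proportionally less.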

Ground Truth & Evaluation

Build verified reference datasets with structured annotations included on every file. Object labels, OCR text, scene classifications, and transcripts give your ML team ready-made ground truth for evaluating model accuracy and detecting regressions.

  • Street sign photos across Manhattan with auto-extracted OCR text
  • Subway and street audio from Midtown with speaker counts and transcripts
  • Intersection videos from Brooklyn with scene descriptions and action labels
  • Multi-modal evaluation sets combining NYC images, audio, and video
curl -X POST https://api.firsthandapi.com/v1/jobs \
  -H "Authorization: Bearer fh_live_..." \
  -H "Idempotency-Key: eval-set-v3" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "data_collection",
    "description": "Photograph street signs clearly, one sign per image. Must be legible and in focus.",
    "files_needed": 500,
    "accepted_formats": ["image/jpeg", "image/png"],
    "price_per_file_cents": 150,
    "location": {
      "city": "New York",
      "state": "NY",
      "latitude": 40.7580,
      "longitude": -73.9855,
      "radius_km": 5
    }
  }'
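Once approved files come back with OCR annotations, you can score a model directly against them. A minimal sketch, assuming each file's annotation payload exposes the extracted sign text (the exact annotation schema here is an assumption, not shown above):

```python
def exact_match_accuracy(ground_truth: list[str], predictions: list[str]) -> float:
    """Fraction of model predictions that exactly match the annotated OCR
    text, ignoring case and surrounding/internal whitespace differences."""
    if len(ground_truth) != len(predictions):
        raise ValueError("ground truth and predictions must align one-to-one")

    def normalize(s: str) -> str:
        return " ".join(s.lower().split())

    hits = sum(normalize(g) == normalize(p)
               for g, p in zip(ground_truth, predictions))
    return hits / len(ground_truth)

# Hypothetical OCR annotations from three street-sign photos vs. model output.
annotated = ["W 42 ST", "Broadway", "No Standing Anytime"]
predicted = ["W 42 ST", "broadway", "No Parking Anytime"]
print(exact_match_accuracy(annotated, predicted))  # 2 of 3 match
```

Running the same evaluation set against each model release turns the annotations into a regression gate.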

LLM Training Data

Source diverse, geo-tagged media at scale for multimodal fine-tuning. Every file arrives with structured annotations — object labels, scene classification, transcripts, color palettes — so your training pipeline starts with rich metadata, not raw blobs.

  • Thousands of NYC street-level photos with auto-generated object and scene labels
  • Accent-varied speech recordings from Queens with full transcripts
  • Short-form video clips from city parks with scene and action tracking
  • Multi-language audio samples from diverse NYC neighborhoods
curl -X POST https://api.firsthandapi.com/v1/jobs \
  -H "Authorization: Bearer fh_live_..." \
  -H "Idempotency-Key: training-batch-042" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "data_collection",
    "description": "Record 30-60s of natural conversation on any topic. Clear audio, minimal background noise.",
    "files_needed": 1000,
    "accepted_formats": ["audio/mp4", "audio/mpeg"],
    "price_per_file_cents": 300,
    "location": {
      "city": "New York",
      "state": "NY",
      "latitude": 40.7282,
      "longitude": -73.7949,
      "radius_km": 15
    }
  }'
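Downstream, transcript annotations can be flattened into a JSONL manifest for a fine-tuning pipeline. A sketch under assumptions: each approved file is represented as a record with a download `url` and a `transcript` annotation (both field names are hypothetical, not taken from the API above):

```python
import json

def to_training_manifest(files: list[dict]) -> str:
    """Flatten approved audio records into JSONL lines of {audio, text}
    pairs, skipping any file that lacks a transcript annotation."""
    lines = []
    for f in files:
        transcript = f.get("annotations", {}).get("transcript")
        if not transcript:
            continue  # unusable for supervised speech fine-tuning
        lines.append(json.dumps({"audio": f["url"], "text": transcript}))
    return "\n".join(lines)

# Hypothetical records for two approved recordings, one missing a transcript.
approved = [
    {"url": "https://cdn.example.com/a1.m4a",
     "annotations": {"transcript": "So I took the 7 train out to Flushing..."}},
    {"url": "https://cdn.example.com/a2.m4a", "annotations": {}},
]
print(to_training_manifest(approved))  # one JSONL line; a2 is skipped
```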

Screen Capture & Recording

Collect app screenshots, workflow recordings, and UI testing data from real devices. Ideal for QA teams validating cross-device rendering, UX researchers studying real user flows, and teams building UI understanding models.

  • Cross-device screenshot testing for responsive design
  • User workflow recordings for UX research
  • App store screenshot generation across locales
  • UI interaction data for accessibility auditing
curl -X POST https://api.firsthandapi.com/v1/jobs \
  -H "Authorization: Bearer fh_live_..." \
  -H "Idempotency-Key: screen-capture-sprint-7" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "data_collection",
    "description": "Record a 60s walkthrough of the onboarding flow. Show each screen clearly.",
    "files_needed": 100,
    "accepted_formats": ["video/mp4", "video/quicktime"],
    "price_per_file_cents": 500
  }'
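Since the job declares `accepted_formats`, a client-side pre-check can spare contributors a rejected upload. A sketch using stdlib MIME detection (that the server rejects non-matching formats is an assumption based on the field name):

```python
import mimetypes

# accepted_formats from the screen-recording job above.
ACCEPTED_FORMATS = {"video/mp4", "video/quicktime"}

def is_acceptable(filename: str) -> bool:
    """Guess a local file's MIME type from its extension and check it
    against the job's accepted_formats before uploading."""
    mime, _ = mimetypes.guess_type(filename)
    return mime in ACCEPTED_FORMATS

print(is_acceptable("onboarding_walkthrough.mov"))   # video/quicktime: accepted
print(is_acceptable("onboarding_walkthrough.webm"))  # video/webm: not accepted
```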

Ready to collect real-world data?

Sign up in under two minutes, get $2.50 in free credits (after email verification), and post your first data collection job today.