Blog

I deliver Gen AI to the App Store

  • How to Release Gen AI Apps on the App Store Fast

    Written by Alex Salvatore, Mobile Gen AI Expert with 10+ years in
    iOS development.
    Last updated: February 2026

    Publishing a Gen AI app on the App Store requires a compliance-first
    approach, not a code-first one. My first app took 10 months to ship. My
    second took 10 days—and hit the top 100 productivity apps. The
    difference was understanding that 60% of the work is App Store
    compliance, not code.

    Why App Store Compliance Matters More Than Code

    With tools like Claude Code, writing app code has become a commodity.
    Building a simple mobile app is now incredibly fast. But Gen AI apps
    that call LLM APIs face a different challenge: App Store guidelines.

    Apple has strict rules around:

    • AI-generated content disclosure – Users must know when content is
      AI-generated
    • Content moderation – You need filters for inappropriate outputs
    • Data privacy – Clear policies on how prompts and responses are
      handled
    • Age restrictions – AI apps often require 17+ ratings

    The 60/40 Rule

    Effort Area          | Time Allocation
    App Store compliance | 60%
    Actual coding        | 40%

    The Optimal Tech Stack for Gen AI iOS Apps

    For fast iteration and App Store approval, use this proven stack:

    • Firebase – Authentication, analytics, and backend
      services with Apple Sign-In support
    • OpenRouter – Single API to access multiple LLM
      providers (OpenAI, Anthropic, Mistral)
    • SwiftUI – Native iOS interface that meets Apple’s
      design guidelines
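    As an illustrative sketch of how this stack wires together on the
    backend: OpenRouter exposes an OpenAI-compatible chat completions
    endpoint, so a request can be assembled as below (the model slug and
    API key are placeholders, not a confirmed configuration):

```python
import json

# OpenRouter's OpenAI-compatible chat completions endpoint
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_chat_request(api_key: str, prompt: str,
                       model: str = "mistralai/mistral-small"):
    """Assemble headers and a JSON body for an OpenRouter chat call."""
    headers = {
        "Authorization": f"Bearer {api_key}",  # placeholder key
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,  # any OpenRouter model slug (OpenAI, Anthropic, Mistral)
        "messages": [{"role": "user", "content": prompt}],
    })
    return headers, body

headers, body = build_chat_request("sk-placeholder", "Summarize my notes")
```

    Swapping providers then means changing only the model slug, which is
    the flexibility argument for using an aggregator.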

    Why iOS First?

    iOS remains the most profitable mobile market for indie developers:

    • Higher average revenue per user (ARPU)
    • Users more willing to pay for premium apps
    • Smaller device fragmentation than Android
    • Better App Store discovery for new apps

    Key Strategy: Skip Monetization on V1

    Launch your first version without in-app purchases or subscriptions.
    This dramatically simplifies the App Store review process:

    1. Faster approval – Fewer compliance checkboxes
    2. User feedback first – Validate your app before
      building payment flows
    3. Iterate quickly – Add monetization once you have
      traction

    Key Takeaways

    • 60% compliance, 40% code – Focus on App Store
      guidelines first
    • iOS first – Most profitable market for indie
      developers
    • Firebase + OpenRouter stack – Proven,
      fast-to-implement architecture
    • No monetization on V1 – Get approved first,
      monetize later

    Frequently Asked Questions

    How long does App Store review take for Gen AI apps?

    Initial review typically takes 24-48 hours, but Gen AI apps may face
    additional scrutiny. Expect 1-2 weeks for first submission while Apple
    evaluates your content moderation approach.

    What LLM provider should I use for iOS apps?

    OpenRouter is recommended because it provides a single API to access
    multiple providers (OpenAI, Claude, Mistral). This lets you switch
    models without code changes and helps with cost optimization.

    Do I need content moderation for my Gen AI app?

    Yes. Apple requires that AI-generated content be filtered for
    inappropriate material. Implement server-side moderation before
    responses reach users. Most LLM APIs offer built-in safety filters.
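    A minimal sketch of that server-side gate, with an invented
    blocklist (a production filter should call the provider's moderation
    API rather than rely on keywords alone):

```python
# Invented terms for illustration; a real filter would call the
# LLM provider's moderation endpoint server-side instead.
BLOCKLIST = {"blocked-term-1", "blocked-term-2"}

def moderate(response_text: str) -> str:
    """Return the LLM response only if it passes a naive keyword check."""
    lowered = response_text.lower()
    if any(term in lowered for term in BLOCKLIST):
        return "This response was withheld by our content filter."
    return response_text

safe = moderate("Here is a summary of your document.")
```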

    Can I use the ChatGPT API directly in my iOS app?

    Yes, but consider using OpenRouter or a similar aggregator. Direct
    API integration works, but aggregators give you flexibility to switch
    providers and often better pricing through volume.


    Have questions about shipping Gen AI apps on the App Store? I’m a
    mobile Gen AI expert with multiple apps live. Try IndieScout ASO to
    validate your app idea’s market fit.

  • App Naming Strategy: How to Choose an iOS App Name That Drives Downloads

    Written by Alex Salvatore, Indie iOS Developer with multiple apps
    on the App Store.
    Last updated: February 2026

    A good app name can make or break your success on the App Store. Your
    app name is your first SEO lever—an optimized name drives free organic
    traffic, while a poor name means paying for every download. Here’s a
    proven framework for choosing names that convert.

    Why App Naming Matters for ASO

    App Store Optimization (ASO) starts with your app name. The App Store
    algorithm heavily weights:

    • Keywords in the title – Direct ranking factor
    • Search relevance – How well the name matches user
      queries
    • Brand memorability – Whether users can find you
      again
    • Feature association – What the app does at a
      glance

    The Free Traffic Equation

    A well-optimized name = free organic traffic = lower customer
    acquisition cost

    What you want: users finding your app through search, not paid ads.
    Your app name is the foundation of this strategy.

    The US iOS Market: Your Primary Target

    The US iOS market generates the highest revenue per user globally.
    When choosing a name:

    Consideration     | Recommendation
    Primary language  | English
    Target market     | United States
    Secondary markets | UK, Canada, Australia
    Keyword research  | Focus on US search volume

    Even if you’re based elsewhere, optimize for US users first—they
    drive the most revenue.

    Essential Tools for App Name Research

    1. ASO Keyword Tools

    Research keyword difficulty and search volume:

    • AppTweak
    • Sensor Tower
    • Mobile Action

    2. Google AdWords Keyword Planner

    Validate demand outside the App Store. High search volume in Google
    often correlates with App Store interest.

    3. Google Trends

    Identify rising trends and seasonal patterns. Time your launch with
    trending keywords.

    4. Domain Availability (.com)

    A matching .com domain strengthens your brand and enables web
    marketing funnels.

    The Golden Rule: Name Tied to Feature

    Your app name should immediately communicate what the app does. Users
    searching for a solution should understand your value proposition from
    the name alone.

    Example: “Great AI”

    • Alexander the Great + AI = humanities + artificial
      intelligence
    • Instantly communicates: AI tool for history/education
    • Memorable wordplay creates brand recognition
    • Keyword “AI” captures search traffic

    Good vs Bad App Names

    Type | Example          | Why
    Good | “PhotoFix Pro”   | Clear function (photo editing) + quality signal (Pro)
    Good | “Budget Tracker” | Direct keyword match for user search
    Bad  | “Zephyr”         | No function indication, hard to find via search
    Bad  | “MyApp 2024”     | Generic, no keyword value, dated

    App Naming Framework

    Follow this process to find your optimal name:

    1. List core features – What does your app actually
      do?
    2. Research keywords – Find high-volume,
      low-competition terms
    3. Generate combinations – Feature + keyword + brand
      element
    4. Check availability – App Store, domain, social
      handles
    5. Validate demand – Google AdWords estimates
    6. Test memorability – Can users spell and remember
      it?
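    Steps 3 and 6 of the framework can be sketched in a few lines:
    combine feature words with researched keywords, then keep only
    candidates within Apple's 30-character name limit (the word lists
    here are illustrative):

```python
from itertools import product

FEATURES = ["Photo", "Budget"]            # step 1: core features
KEYWORDS = ["Editor", "Tracker", "Pro"]   # step 2: researched keywords

def candidate_names(features, keywords, limit=30):
    """Steps 3 and 6: generate combinations, filter by name-length limit."""
    names = (f"{feat} {kw}" for feat, kw in product(features, keywords))
    return [name for name in names if len(name) <= limit]

candidates = candidate_names(FEATURES, KEYWORDS)
```

    The survivors still need the manual checks in steps 4 and 5:
    App Store availability, domain, and demand validation.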

    Key Takeaways

    • Target US iOS market first – Highest revenue
      potential
    • Use ASO tools – Data-driven name selection
    • Name = feature – Users should know what you do
      instantly
    • Free traffic is the goal – Optimize for organic
      discovery

    Frequently Asked Questions

    How long should my app name be?

    Apple allows 30 characters for your app name. Use keywords
    strategically, but keep it readable. Aim for 20-25 characters that
    balance branding with keyword optimization.

    Should I include keywords in the app name?

    Yes, but naturally. “Photo Editor Pro” is better than “Photo Editor
    Keywords Photo Edit Camera.” Apple penalizes keyword stuffing and may
    reject apps with obvious manipulation.

    Can I change my app name after launch?

    Yes, through App Store Connect. However, changing names can
    temporarily affect your rankings as the algorithm reindexes. Plan name
    changes carefully and avoid frequent changes.

    How do I research keywords for ASO?

    Use dedicated ASO tools (AppTweak, Sensor Tower) to find keyword
    search volume and difficulty. Complement with Google Keyword Planner to
    validate broader demand. Look for keywords with decent volume but lower
    competition.

    Does the app name affect App Store search rankings?

    Absolutely. The app name is one of the strongest ranking signals.
    Keywords in your title directly influence which searches your app
    appears in. This is why strategic naming is crucial for organic
    discovery.


    Want to validate your app name with real App Store data? Try
    IndieScout ASO to research keywords and find market opportunities.

  • Claude Code and the Coding Singularity: How AI Changed Software Development

    Written by Alex Salvatore, Indie iOS Developer with multiple Gen
    AI apps on the App Store.
    Last updated: February 2026

    We have entered the coding singularity. With Claude Code and Opus
    4.5, developers can now one-shot complex features that took weeks to
    build manually. Code has become a commodity—the real value is now in
    product vision, market understanding, and execution speed.

    The Evolution of AI-Assisted Coding (2025-2026)

    One year ago, “vibe coding” with LLMs hit a glass ceiling. Beyond
    basic features, the AI couldn’t keep up with real application
    complexity. That changed dramatically in 12 months:

    Timeline      | Capability
    Summer 2025   | Claude Code arrives—finally possible to build entire apps
    November 2025 | Opus 4.5 enables one-shotting complex features
    January 2026  | Complete production apps built in days, not months

    Real-World Results

    This week, I built a complete ASO (App Store Optimization) tool that
    crawls iTunes to find mobile app opportunities. The app is already
    submitted for App Store review.

    Comparable tools in this space charge $108/year (see Astro App as a
    reference). An entire SaaS product, built in one week.

    Why Claude Code Dominates the Market

    After testing multiple AI coding assistants, Claude Code stands
    out:

    • Superior UI generation – Creates production-quality
      SwiftUI and React interfaces
    • Context retention – Maintains understanding across
      large codebases
    • Complex feature handling – One-shots features that
      require multiple file changes
    • Iterative refinement – Improves code through
      conversation, not just generation

    Comparison: Claude Code vs Mistral Code CLI

    Aspect              | Claude Code          | Mistral Code CLI
    UI generation       | Excellent            | Significantly behind
    Complex features    | One-shot capable     | Requires more iteration
    Developer sentiment | Universally positive | Mixed reviews
    Cost efficiency     | Worth the premium    | Cheaper but less capable

    What This Means for Developers

    The implications are profound. Code itself is no longer the
    differentiator. Value has shifted to:

    • Product vision – Understanding what to build
    • Market insight – Knowing who needs it
    • Execution speed – Shipping before competitors
    • Distribution – Getting the product to users

    The New Developer Skillset

    Successful developers in 2026 focus on:

    1. Problem identification – Finding valuable problems
      to solve
    2. AI collaboration – Effectively directing AI coding
      assistants
    3. Quality assurance – Reviewing and refining
      AI-generated code
    4. Product marketing – Positioning and selling the
      finished product

    Key Takeaways

    • Coding singularity is here – Complex apps can be
      built in days
    • Claude Code leads – Especially for UI generation
      and complex features
    • Value has shifted – From code quality to product
      vision and market fit
    • Speed is everything – Build and ship fast, iterate
      based on feedback

    Frequently Asked Questions

    What is the “coding singularity”?

    The coding singularity refers to the point where AI can generate
    production-quality code faster than humans can write it manually. We’ve
    reached this point—complete applications can now be built in days using
    AI assistants like Claude Code.

    Is Claude Code worth the cost compared to free alternatives?

    Yes, for production applications. Claude Code’s ability to one-shot
    complex features and generate quality UI saves significant development
    time. The productivity gains far outweigh the subscription cost for
    serious projects.

    Can AI coding assistants replace human developers?

    Not entirely. AI excels at code generation but requires human
    direction for product vision, market understanding, and quality
    assurance. The role of developers is shifting from writing code to
    directing AI and ensuring quality.

    How does vibe coding work in practice?

    Vibe coding involves describing what you want in natural language and
    letting the AI generate the implementation. With Claude Code, you can
    describe complex features (“add user authentication with Apple Sign-In”)
    and receive working code that integrates with your existing
    codebase.

    What skills should developers learn for the AI era?

    Focus on product thinking, market research, user experience design,
    and AI collaboration. Technical skills still matter for code review and
    debugging, but the ability to identify valuable problems and direct AI
    effectively is increasingly important.


    I’m an indie iOS developer with multiple Gen AI apps on the App
    Store. Try IndieScout ASO to validate your app idea’s market fit.

  • Mistral AI: The GDPR-Compliant Alternative to Chinese LLM Models

    Written by Alex Salvatore, Mobile Gen AI Expert with 10+ years in
    iOS development.
    Last updated: February 2026

    Mistral AI offers the best GDPR-compliant alternative to Chinese LLM
    models like DeepSeek, Qwen, and Kimi. At 3.5x cheaper than Gemini Flash
    and natively hosted in the EU, Mistral Small 3.1 delivers both cost
    savings and regulatory compliance for European developers.

    The Problem with Chinese AI Models

    Chinese LLM providers offer attractive pricing, but come with
    significant compliance risks:

    • Data sovereignty – Your data is processed and
      stored in China
    • Regulatory uncertainty – Italy has already banned
      several Chinese AI services
    • Ongoing investigations – France and Belgium are
      actively investigating compliance
    • GDPR violations – Data transfers to China lack
      adequate protection mechanisms

    European Regulatory Actions

    Country | Status
    Italy   | Chinese AI models banned
    France  | Under investigation
    Belgium | Under investigation
    Germany | Reviewing compliance

    Mistral Small 3.1: The Sovereign Alternative

    Mistral AI, headquartered in Paris, offers a fully GDPR-compliant
    solution:

    • EU-native hosting – Data never leaves European
      servers
    • 100% GDPR compliant – Built for European
      regulations from day one
    • Apache 2.0 license – Self-hosting option
      available
    • Competitive pricing – Significantly cheaper than US
      alternatives

    Pricing Comparison

    Model             | Input Cost     | Output Cost    | 1000 tasks/month
    Mistral Small 3.1 | $0.03/M tokens | $0.11/M tokens | $0.52
    Gemini 2.0 Flash  | $0.10/M tokens | $0.40/M tokens | $1.80

    Result: Mistral is 3.5x cheaper than Gemini
    Flash.

    Smart Model Routing Strategy

    Optimize costs by routing tasks to the appropriate model:

    def route_task(task_type: str) -> str:
        """Route a task to the cheapest model that can handle it."""
        if task_type in ("extraction", "formatting", "summary"):
            return "mistral-small-3.1"  # Cost-effective for simple tasks
        if task_type in ("complex_analysis", "synthesis"):
            return "gemini-2.0-flash"  # Better for complex reasoning
        return "mistral-small-3.1"  # Default unknown tasks to the cheaper model

    When to Use Mistral Small 3.1

    • Text extraction and parsing
    • Document summarization
    • Data formatting and transformation
    • Simple classification tasks
    • Translation (especially French)

    When to Use Higher-Tier Models

    • Complex multi-step reasoning
    • Creative writing requiring nuance
    • Tasks requiring latest knowledge
    • Code generation for complex systems

    Key Takeaways

    • Avoid Chinese AI models for GDPR compliance—data
      sovereignty risks are real
    • Mistral Small 3.1 is 3.5x cheaper than Gemini
      Flash
    • EU-native hosting means zero data transfer
      concerns
    • Smart routing between models optimizes both cost
      and quality

    Frequently Asked Questions

    Is Mistral AI fully GDPR compliant?

    Yes. Mistral AI is a French company that hosts all data in the
    European Union by default. They have built their infrastructure
    specifically for GDPR compliance, making them a safe choice for European
    businesses.

    Can I self-host Mistral models?

    Yes. Mistral releases many models under the Apache 2.0 license,
    allowing self-hosting. This gives you complete control over your data
    and eliminates third-party data processing concerns entirely.

    How does Mistral compare to OpenAI for European apps?

    Mistral offers better GDPR compliance since data stays in the EU.
    OpenAI’s data processing involves US servers, which complicates GDPR
    compliance. For cost-sensitive tasks, Mistral Small 3.1 is also
    significantly cheaper.

    What happened with Chinese AI models in Italy?

    Italy’s data protection authority (Garante) took action against
    several Chinese AI services over GDPR concerns, including data transfer
    violations and lack of transparency about data processing.


    Sources:
    • Mistral AI Help Center: Data hosted in EU by default
    • Mistral pricing: mistral.ai/pricing
    • Gemini pricing: ai.google.dev/pricing

    Questions about integrating Mistral into your apps? I’m a mobile
    Gen AI expert with multiple apps on the App Store. Try IndieScout ASO
    to validate your app idea.

  • Kimi AI vs ChatGPT: GDPR-Compliant Ways to Use Chinese AI Models in Europe

    Written by Alex Salvatore, Mobile Gen AI Expert with 10+ years in
    iOS development.
    Last updated: February 2026

    Chinese AI models like Kimi AI aren’t automatically banned under
    GDPR—only direct API calls to Chinese servers are problematic. Kimi K2.5
    and DeepSeek are open source, which means you can use them legally in
    Europe through EU-hosted providers or self-hosting. Here are three
    GDPR-compliant options with real pricing.

    The Real GDPR Problem with Chinese AI

    The common assumption is wrong. “Chinese model = GDPR violation” is
    overly simplistic.

    What’s actually prohibited:

    • Direct API calls to Moonshot (China-hosted)
    • Data transfers to Chinese servers without adequate safeguards
    • Using services that store data in China

    What’s allowed:

    • Open-source models hosted in the EU
    • Self-hosted instances on European infrastructure
    • Third-party providers with EU data residency

    Recent Regulatory Actions

    Country | Action                       | Date
    Italy   | Banned DeepSeek (direct API) | January 30, 2025
    France  | CNIL investigation ongoing   | 2025
    Belgium | Under investigation          | 2025

    The bans target direct Chinese API access, not the open-source models
    themselves.

    Option 1: Kimi K2 via Nebius (Simplest Solution)

    Nebius is a Dutch company hosting Kimi K2 on
    European infrastructure. Your data never touches Chinese servers.

    Nebius Pricing

    Metric           | Cost
    Input (cached)   | $0.15/M tokens
    Input (uncached) | $0.60/M tokens
    Output           | $2.50/M tokens
    Context window   | 131K tokens

    GDPR Advantages

    • Dutch company – EU jurisdiction
    • European infrastructure – Data stays in EU
    • No CLOUD Act – Unlike US providers
    • Configurable data residency – Choose your
      region

    Cost for 1,000 tasks/month: ~$1.85

    Option 2: Self-Host DeepSeek R1 Distill 70B

    DeepSeek R1 is open source. The 70B distilled version offers
    excellent performance-to-cost ratio for high-volume applications.

    Hardware Requirements

    Configuration       | VRAM        | Notes
    2x NVIDIA A100 80GB | ~140GB FP16 | Full precision
    1x A100 80GB        | ~70GB INT4  | Quantized
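    The FP16 figure follows from a simple rule of thumb: weight memory
    is roughly parameter count times bytes per weight (activation and
    KV-cache overhead ignored in this sketch):

```python
def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate VRAM needed for model weights alone."""
    # params_billions * 1e9 params * bytes, divided by 1e9 bytes per GB
    return params_billions * bytes_per_param

fp16_gb = weight_memory_gb(70, 2.0)  # 70B weights at 2 bytes each
```

    70 × 2 gives 140 GB, matching the two-A100 row; quantizing to fewer
    bytes per weight shrinks this proportionally, before runtime overhead.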

    GCP europe-west9 (Paris) Pricing

    Instance Type           | Hourly Cost | Monthly Cost
    On-demand               | ~$5-6/hour  | ~$3,600/month
    Spot VMs (70% discount) | ~$1.50/hour | ~$1,100/month

    Performance: ~25 tokens/second

    Break-even point: Self-hosting becomes
    cost-effective at ~50,000 requests/month.

    # vLLM deployment on GCP Paris
    # Instance: a2-ultragpu-2g (2x A100 80GB), region europe-west9 (Paris)
    # Spot VMs for cost optimization; model id assumed from Hugging Face
    vllm serve deepseek-ai/DeepSeek-R1-Distill-Llama-70B --tensor-parallel-size 2

    Option 3: Self-Host Mistral Small 24B

    Lighter and more accessible for smaller operations.

    Hardware Requirements

    Configuration       | VRAM
    1x NVIDIA A100 80GB | ~55GB BF16
    2x L40S             | ~55GB total

    GCP Paris Pricing

    Instance Type       | Hourly Cost | Monthly Cost
    On-demand A100 80GB | ~$3/hour    | ~$2,200/month
    Spot VMs            | ~$1/hour    | ~$730/month

    Performance: ~40-50 tokens/second

    Complete Cost Comparison

    Solution                    | Input $/M  | Output $/M | Self-host/month | GDPR Status
    Kimi K2 API (Moonshot)      | $0.60      | $2.50      | –               | Not compliant (China)
    Kimi K2 via Nebius          | $0.15-0.60 | $2.50      | –               | Compliant (Netherlands)
    DeepSeek R1 70B self-host   | –          | –          | $1,100 (spot)   | Compliant
    Mistral Small 3.1 API       | $0.10      | $0.30      | –               | Compliant (France)
    Mistral Small 24B self-host | –          | –          | $730 (spot)     | Compliant

    Cost Per 1,000 Tasks (1K input, 500 output tokens each)

    • Mistral API: $0.25
    • Kimi via Nebius: $1.85
    • Self-host: $730-1,100 fixed monthly
      (volume-dependent ROI)
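    These per-1,000-task figures can be reproduced directly from the
    per-token prices, assuming 1K input and 500 output tokens per task as
    stated above:

```python
def api_cost(tasks: int, in_tokens: int, out_tokens: int,
             in_price: float, out_price: float) -> float:
    """Total USD cost; prices are quoted per million tokens."""
    millions_in = tasks * in_tokens / 1e6    # total input tokens, in millions
    millions_out = tasks * out_tokens / 1e6  # total output tokens, in millions
    return millions_in * in_price + millions_out * out_price

mistral = api_cost(1000, 1000, 500, 0.10, 0.30)  # ≈ $0.25
nebius = api_cost(1000, 1000, 500, 0.60, 2.50)   # ≈ $1.85 (uncached input)
```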

    Recommendations by Volume

    Low Volume (<10K requests/month)

    Use Mistral API. At $0.25 per 1,000 tasks, it’s
    unbeatable for cost and simplicity. French company, EU-hosted, fully
    compliant.

    Medium Volume + Kimi Requirement

    Use Nebius. Same Kimi model, GDPR-compliant Dutch
    infrastructure. No changes to your code beyond the API endpoint.

    High Volume (>50K requests/month)

    Self-host on GCP Paris. DeepSeek R1 70B or Mistral
    Small 24B with Spot VMs. Fixed monthly cost becomes more economical than
    per-token pricing.

    Key Takeaways

    • Chinese models aren’t banned – Only direct API
      calls to China are problematic
    • Open source = options – Kimi K2.5 and DeepSeek can
      be legally hosted in Europe
    • Nebius is the easy path – Same Kimi model, Dutch
      infrastructure, API-compatible
    • Self-hosting at scale – Becomes cost-effective
      above 50K requests/month
    • Mistral for simplicity – French, cheap, and fully
      compliant

    Frequently Asked Questions

    Is Kimi AI GDPR compliant?

    It depends on how you use it. Direct API calls to Moonshot (the
    Chinese company behind Kimi) violate GDPR because data is processed in
    China. However, using Kimi K2.5 through EU-hosted providers like Nebius
    or self-hosting is fully compliant.

    What’s the difference between Kimi and DeepSeek?

    Both are Chinese open-source LLMs with strong performance. Kimi K2.5
    (by Moonshot AI) has a 131K context window and excels at multilingual
    tasks. DeepSeek R1 is known for reasoning capabilities. Both can be
    hosted in the EU for GDPR compliance.

    Why did Italy ban DeepSeek?

    Italy’s data protection authority banned direct access to DeepSeek’s
    API because it involves data transfers to China without adequate GDPR
    safeguards. The ban doesn’t apply to self-hosted instances or EU-based
    hosting providers running the open-source model.

    Is self-hosting worth it for small applications?

    Generally no. At less than 50,000 requests per month, API services
    like Mistral ($0.25/1K tasks) or Nebius are more cost-effective.
    Self-hosting requires infrastructure management, and the fixed monthly
    cost only makes sense at scale.

    How does Nebius achieve GDPR compliance with a Chinese model?

    Nebius is a Dutch company running Kimi K2.5 on European
    infrastructure. The model weights are open source—Nebius simply runs
    them on EU servers. Your data never leaves European jurisdiction,
    satisfying GDPR requirements.


    Sources:
    • Nebius Token Factory Pricing
    • DeepSeek R1 GPU Requirements
    • GCP GPU Pricing
    • DeepSeek Italy Ban – Euronews

    Questions about integrating GDPR-compliant LLMs into your apps?
    I’m a mobile Gen AI expert with multiple apps on the App Store. Try
    IndieScout ASO to validate your app idea.

  • The Missing Link: Why AI Won’t Scale Without Smart Glasses

    Two weeks ago, Mark Zuckerberg revealed the Meta Ray-Ban vision at Meta Connect. People put this product at the same technological level as VR and AR, pointing to the Vision Pro or, more aptly, Google Glass, both huge flops.

    But the Meta glasses have a plus.

    AI.

    And I think that while Meta glasses surely need AI, AI needs Meta glasses just as much.

    “I use ChatGPT as a general purpose tool, and I hope that in a few years I will use it in more things than I do. It’s obviously still terribly integrated into most people’s workflows, but it’s going to get better at that.”

    – Sam Altman, on David Perell’s podcast: https://www.youtube.com/watch?v=6pxmdmlJCG0&t=895s

    Anyone who has tried to build anything serious with AI quickly finds that the tech is not the real challenge; the challenge is fitting it into the user’s routine and making it easy for the user to fill their context.

    This is why the best AI apps are wired to email and calendar. And this concept is nothing without AI’s capacity to analyze the results of its predictions. It’s called the cybernetic feedback loop: AI prediction → positive or negative feedback → readjustment of the next prediction toward positive feedback. It is the core concept of what Bryan Johnson does with his Don’t Die project: every day he varies his daily routine, monitors his sleep quality, and then maximizes the elements that reinforce his sleep quality and minimizes those that weaken it.

    Principle diagram of a cybernetic system with a feedback loop
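    As a toy illustration of that loop (all numbers invented), a single
    routine variable is nudged at random, and a change is kept only when
    the measured score improves:

```python
import random

def feedback_loop(score, steps=100, seed=0):
    """Hill-climb one routine variable using only better/worse feedback."""
    rng = random.Random(seed)
    x = 0.0                      # current routine setting
    best = score(x)              # measured outcome (e.g. sleep quality)
    for _ in range(steps):
        candidate = x + rng.uniform(-1, 1)  # vary the routine slightly
        outcome = score(candidate)          # observe the feedback
        if outcome > best:                  # positive feedback: keep it
            x, best = candidate, outcome
    return x

# Toy "sleep quality" that peaks when the routine variable equals 3
optimum = feedback_loop(lambda x: -(x - 3) ** 2)
```

    Vary, measure, keep what helps: the same protocol in miniature.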

    This capacity for AI to close its feedback loop is very easy to achieve in mathematics, video games, and programming, where everything happens in a simple, closed virtual environment. This is why all of AI’s current exponential growth happens only in these fields.

    If you want to extend that exponential growth to other fields, you need live visual and audio context. You need a new, AI-first device. This is why both Mark Zuckerberg and Sam Altman, through his collaboration with Jony Ive, are after it.

    Building my Alexander the Great app, which runs on ancient Greek books, showed me how much people underestimate how hard it is to insert even an incredible app into their daily life, for two reasons: it needs to be somehow ritualistic, and users have no idea how it could help them. That’s why the top apps are fitness apps (ritualistic) and general-purpose AI apps (whatever you ask, you will always get a reply).

    Meta Glasses will probably create room for a third category. Peripatetic apps, anyone?

  • From Zero Users to a $270 Million Opportunity: Building the Future of Classical Education

    I built a mobile application focused on Ancient Greek texts, polished it to perfection, and released it on the App Store. Despite its genuine usefulness, I found zero audience. What I didn’t realize was that I had stumbled into a $270 million problem—and potentially, its solution.

    The Personal Journey That Started It All

    A few years ago, when my son was born, I found myself carefully selecting stories to share with him. I wanted tales that would not only entertain but also teach him about life, character, and wisdom. This quest led me to fall in love with classical education.

    There was one problem: I wasn’t a classical scholar, just a programmer. Leveraging tech to get access to all these ancient texts felt like the perfect move. I gathered all the ancient Greek texts about Alexander the Great and built them into an application I called “The Greatest Alexander the Great Application.”

    The result exceeded my expectations:

    • More accurate than Wikipedia
    • Providing valuable advice for most life questions
    • Consistently delivering actionable insights every time I used it

    Yet despite creating something genuinely valuable, I faced a harsh reality: zero audience.

    The Hidden $270 Million Market

    My failure led me to a surprising discovery. I wasn’t alone in this struggle: I had unknowingly entered the classical education market, worth $270 million in the US alone.

    Source: Based on 677,500 students enrolled in classical education (2023-24, Arcadia Education market analysis) × estimated $400 per student spending on specialized curriculum materials

    But this market faces a critical challenge: it’s shrinking fast. Ancient Greek enrollment dropped from 22,000 US students in 2006 to just 13,000 in 2016—a 40% decline in a single decade.

    Source: Modern Language Association of America 2016 report

    Why Video Games Aren’t the Answer

    Many educators point to gamification as the solution. The theory sounds compelling: make learning fun through games, and engagement will follow.

    Revenue Cat, State of subscriptions 2025

    But the data tells a different story. According to RevenueCat’s 2025 report, video games generate the lowest per-install revenue of any app category. Despite 15 years of hype around educational gaming, there are virtually zero examples of successful commercial educational games.

    Meanwhile, on the other side of the spectrum, we see genuine success stories like Alpha School in Austin and Ad Astra (SpaceX’s private school)—both traditional, high-quality educational institutions that people pay premium prices to attend.

    The Commercial Validation Principle

    “If a product is World Class… it is a product, and somebody commercial has bought this product. It is very implausible that you will have the best software product in the world, and you never sold it commercially.”

    – Alex Karp, CEO of Palantir

    If I can’t convince people to pay for my solution, how can I be certain it actually solves their problem? The path forward isn’t to become the “Zynga of Classical Education” but rather to build something that combines the AI capabilities of a Grok with the practical wisdom and social media approach of a Ryan Holiday.

    The Rebrand: From Alexander to Aristotle

    I’ve begun transforming my Alexander the Great app into an “Ancient Coaching” application—essentially a virtual Aristotle. This better captures the app’s true purpose: providing personalized wisdom and guidance drawn from classical sources.

    My strategy leverages several key elements:

    Platform Focus: iOS as the primary platform, since the US App Store accounts for roughly 70% of all mobile revenue.

    AI Integration: The RAG (Retrieval-Augmented Generation) system I developed works remarkably well, providing contextually relevant advice from classical sources

    Rewards: Creating a cybernetic feedback loop where the more someone uses the app, the more equipped they become to share its value with others

    Future Technology: Preparing for emerging platforms like Meta’s Ray-Ban smart glasses, which could seamlessly integrate classical wisdom into the flow of daily life. No need to fill in the prompt context manually anymore (just imagine having Aristotle live as a rhetoric advisor during an interview).

    The Vision: Capturing Aristotle’s Worldview

    “My hope is someday, we can capture the underlying worldview of that Aristotle in a computer and someday, some student will be able to ask Aristotle a question and get an answer.”

    – Steve Jobs

    That future is closer than we think. With current AI capabilities and the vast corpus of classical texts available, we can begin to approximate that experience today.

    What’s Next

    My journey is just beginning. With the Android version live and the iOS version in development, I’m building not just an app but a new category: AI-powered classical education that people actually want to use—and pay for.

    The decline in classical education isn’t inevitable. It’s a market opportunity waiting for the right approach: one that combines ancient wisdom with modern technology, practical value with commercial viability.

    The question isn’t whether there’s demand for classical wisdom—it’s whether we can package and deliver it in a way that fits into modern life. I believe the answer lies in AI, and I’m committed to proving it.


    Follow my journey as I build the future of classical education. The next post will dive deep into the technical architecture of creating a virtual Aristotle and the early user feedback that’s shaping the product. You can follow me on X.com and LinkedIn.

  • Google’s Photoshop Killer Gets Killed by Flux-Kontext

    Google’s Photoshop Killer Gets Killed by Flux-Kontext

    Google Gemini’s image editing feature threatened Photoshop before getting overshadowed by this open-source model. I’m telling you all about Flux Kontext, where to get it, and what prompts I use.

    I’ve become addicted to Flux Kontext. Flux is a family of generative flow-matching models, developed by Black Forest Labs in Freiburg, Germany, that can generate new images from text prompts and edit existing ones. Their model powered X’s image generation feature.

    What makes Flux Kontext revolutionary is its capacity to edit and merge existing images very quickly. It’s the perfect model not only for professional image manipulation and restoration, but also for indie app developers like myself.

    Here are some wild examples:

    Strengths and Limitations

    Here are the important things I learned from my experiments with the model:

    Strengths:

    • Amazing at editing images (changing colorimetry, contrast, adding elements or backgrounds to paintings, restoring black and white photos, and copying styles)
    • Perfect for image restoration, like enhancing photos that were too dark
    • Can add or alter elements of pictures without changing their style, including backgrounds

    Limitations:

    • Struggles with complex Renaissance paintings due to too many details
    • Cannot complete missing parts of pictures – it wasn’t able to reconstruct missing sections of paintings or mosaics. That said, it can still produce amazing results, as I experienced with the Pompeii Alexander Mosaic, especially on the Darius III side
    • Sometimes it just doesn’t work and you don’t know why

    Where to Use It

    It’s fully available on Replicate. You need to create an account on Replicate.com, enter your credit card information, and go to the model page at https://replicate.com/black-forest-labs/flux-kontext-pro, or you can use the model via API. Each run costs about $0.04 per image, with generation taking roughly 4-5 seconds.

    Example Prompts

    Prompt: Make it a modern Japanese anime style, be careful with the cape of the character, add a green dominant color to the picture, and add a green forest in the background with ancient Greek ruins and a very tall black tower with fire on its top, and make the sky dark and red

    Prompt: Add a sunrise and an ancient Greek city in the background, and add a serpent on the ground in the right corner and the silhouette of an eagle in the sky.

    Prompt: Restore an ancient Greek fresco depicting the mythological scene of the Rape of Persephone by Hades. The composition features Hades, with wild red hair, driving a chariot pulled by white horses as he abducts Persephone. She is shown struggling or reaching out, draped in flowing purple robes, while a female figure (possibly a companion or goddess) reacts in the background. Use authentic Hellenistic fresco style, with textured wall surface, weathered edges, and a faded but vibrant color palette dominated by ochres, reds, purples, and ivory tones. Keep the anatomical proportions and expressions true to the original. Goal: High-resolution restoration that evokes the original vibrancy while preserving historical accuracy. [As I mentioned, Flux-Kontext cannot complete/imagine missing parts of a picture, so heavily damaged pictures require you to come up with a very long prompt, which I generate with ChatGPT by providing the damaged picture and describing what it represents]

    Prompt: Make the photo look like a 1990s American TV show style

    Prompt: Make it look like a Renaissance painting, add a lion’s head hat on the head and a sword to the man on the left, and a turban/Persian hat to the man on the right, add ancient Greek ruins in the background.

    Prompt: Make it look like a 1990s American movie style.

  • Implementing RAG on Ancient Greek Text: A Technical Journey

    Implementing RAG on Ancient Greek Text: A Technical Journey

    I am currently working on an ambitious project that involves performing RAG on Ancient Greek text.

    RAG stands for Retrieval-Augmented Generation. It’s designed to solve two recurring problems with LLMs: the gaps in their knowledge base and their limited context. Even though context windows have grown—Google Gemini offers 1M tokens—that’s barely enough to compare two different versions of the Bible. RAG retrieves the relevant documents with a semantic query and feeds them into your prompt to augment it.
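    The retrieve-then-augment loop can be sketched in a few lines of TypeScript. This is a toy illustration, not my production pipeline: the vectors below are hand-made stand-ins for real embedding-API output, and cosine similarity is the usual measure vector databases apply under the hood.

```typescript
// Toy RAG flow: rank stored chunks by cosine similarity to the query
// vector, then prepend the top matches to the prompt.

type Chunk = { text: string; vector: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function retrieve(query: number[], chunks: Chunk[], k: number): Chunk[] {
  return [...chunks]
    .sort((x, y) => cosine(query, y.vector) - cosine(query, x.vector))
    .slice(0, k);
}

function augmentPrompt(question: string, context: Chunk[]): string {
  return `Context:\n${context.map(c => c.text).join("\n")}\n\nQuestion: ${question}`;
}

// Hand-made vectors standing in for real embeddings.
const chunks: Chunk[] = [
  { text: "Gyges finds a ring of invisibility.", vector: [0.9, 0.1, 0.0] },
  { text: "Socrates discusses justice in the city.", vector: [0.1, 0.9, 0.2] },
];
const queryVector = [0.8, 0.2, 0.1]; // stand-in embedding of "Who was Gyges?"

const top = retrieve(queryVector, chunks, 1);
console.log(augmentPrompt("Who was Gyges?", top));
```

    In the real pipeline, the embedding service produces the vectors and the vector database does the similarity ranking; the shape of the flow stays the same.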

    For this project, I’m using GCP Vertex AI, Genkit, and Firebase with Node TypeScript.

    Firebase is a suite of cloud services that includes a database, authentication, AI features, and other tools. Genkit is Google’s framework for writing and chaining AI operations: calling different models, querying vector databases, and enabling agentic behavior. It can be run with a local UI, which greatly simplifies development by showing the result of each step in the chain. Vertex AI is GCP’s collection of AI-specialized services.

    The Genkit UI

    When I first saw RAG tutorials, I was hoping to make it work in one week. It took me more than a month. Here is an important extract of the text (shown here in English) I wanted to experiment with:


    [359d] which men say once came to the ancestor of Gyges the Lydian. They relate that he was a shepherd in the service of the ruler at that time of Lydia, and that after a great deluge of rain and an earthquake the ground opened and a chasm appeared in the place where he was pasturing; and they say that he saw and wondered and went down into the chasm; and the story goes that he beheld other marvels there and a hollow bronze horse with little doors, and that he peeped in and saw a corpse within, as it seemed, of more than mortal stature,

    [359e] and that there was nothing else but a gold ring on its hand, which he took off and went forth. And when the shepherds held their customary assembly to make their monthly report to the king about the flocks, he also attended wearing the ring. So as he sat there it chanced that he turned the collet of the ring towards himself, towards the inner part of his hand, and when this took place they say that he became invisible

    [360a] to those who sat by him and they spoke of him as absent and that he was amazed, and again fumbling with the ring turned the collet outwards and so became visible. On noting this he experimented with the ring to see if it possessed this virtue, and he found the result to be that when he turned the collet inwards he became invisible, and when outwards visible; and becoming aware of this, he immediately managed things so that he became one of the messengers

    [the story continues for 3 paragraphs]


    The Technical Foundation

    To perform RAG, your documents first need to be chunked (I use the “llm-chunk” npm library). Chunking breaks your text into smaller pieces, making them ready for vectorization. Vectorization stores the text in semantic form, capturing the meaning of words in context. Note that vectorization is performed by third-party cloud services, so the amount of text you can vectorize in one request is limited—in practice, you can hardly go beyond 700 KB per request.
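    Because of that ceiling, I batch chunks before calling the embedding API. The helper below is my own hypothetical sketch, not part of Genkit or Vertex AI, and the 700 KB figure is just the practical limit I ran into:

```typescript
// Group chunks into batches whose total UTF-8 size stays under a byte
// budget, so each embedding request stays below the service limit.
// batchChunks is a hypothetical helper, not a library function.

function batchChunks(chunks: string[], maxBytes: number): string[][] {
  const encoder = new TextEncoder();
  const batches: string[][] = [];
  let current: string[] = [];
  let size = 0;
  for (const chunk of chunks) {
    const bytes = encoder.encode(chunk).length; // UTF-8 size (Greek is multi-byte)
    if (current.length > 0 && size + bytes > maxBytes) {
      batches.push(current);
      current = [];
      size = 0;
    }
    current.push(chunk);
    size += bytes;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}

// The practical per-request ceiling I observed was around 700 KB.
const MAX_REQUEST_BYTES = 700 * 1024;
```

    Each batch then becomes one embedding request; UTF-8 sizing matters here because Greek characters take two bytes each.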

    To store these vectorized texts, you need a vector database. I chose Firestore because I’m a Firebase enthusiast, but Supabase and Qdrant also offer vector databases.

    The First Challenge: Embedding Models

    Here comes my first problem: the Genkit RAG tutorial recommended using ‘text-embedding-005’. My initial tests on English text were going great, until I tried to search for the story of Gyges in the Greek version of Plato’s Republic. It was impossible to get any valuable information from the RAG process. This is because non-Latin text requires a multilingual embedding model—text-multilingual-embedding-002, exposed in Genkit as textMultilingualEmbedding002—which enables RAG on Greek, Hindi, Russian, Japanese, Arabic, and tons of other languages.
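    In Genkit, switching embedders is essentially a one-line change. A sketch of what that looks like (the import names follow the Genkit Vertex AI plugin as I used it; double-check them against the current docs before copying):

```typescript
import { genkit } from "genkit";
import {
  vertexAI,
  textMultilingualEmbedding002, // Greek, Hindi, Russian, Japanese, Arabic, ...
} from "@genkit-ai/vertexai";

const ai = genkit({ plugins: [vertexAI()] });

// Embedding Ancient Greek with the monolingual model returned noise;
// the multilingual model is what made retrieval work.
const embedding = await ai.embed({
  embedder: textMultilingualEmbedding002,
  content: "Γύγης ὁ Λυδός",
});
```

    This fragment needs a configured GCP project to actually run; the point is only where the embedder choice lives.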

    [Reference: https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/text-embeddings-api?hl=fr]

    The Second Challenge: Narrative Coherence

    Great! I got Gyges working. But now I had another problem: Gyges is only mentioned once in Plato’s Republic, but his story spans four paragraphs and ends up split across multiple chunks that won’t be retrieved during semantic search. This means you end up with only part of Gyges’ story, not the complete narrative.

    Chunking takes configuration parameters that control chunk size and the overlap between chunks. A website like ChunkViz (https://chunkviz.up.railway.app/) helps you preview what different chunk lengths and overlaps will look like. However, in my case, increasing the overlap to even 400 characters wasn’t sufficient—I needed at least 10 times that size.

    ChunkViz visual preview
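    Size and overlap behave like a sliding window. Here is a minimal, hypothetical reimplementation (character-based, unlike llm-chunk’s sentence-aware splitting) just to show how the two parameters interact:

```typescript
// Sliding-window chunker: each chunk is `size` characters and starts
// `size - overlap` characters after the previous one, so adjacent
// chunks share `overlap` characters.

function chunkText(text: string, size: number, overlap: number): string[] {
  if (overlap >= size) throw new Error("overlap must be smaller than size");
  const step = size - overlap;
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last window reached the end
  }
  return chunks;
}
```

    With size 4 and overlap 2, "abcdefghij" becomes ["abcd", "cdef", "efgh", "ghij"]: each chunk repeats the tail of the previous one, which is exactly what failed to keep the four-paragraph Gyges story together at a 400-character overlap.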

    One workaround would have been creating a second request to query all documents adjacent to the retrieved ones. That would have worked, but I found a far more innovative method that opens up infinite possibilities.

    The Creative Solution

    Compromise is a JavaScript NLP (Natural Language Processing) library that extracts names and places from texts (along with verbs and adjectives). While running your RAG pipeline, you can simultaneously populate an entire database of places and characters. Of course, getting Compromise to work on Greek text wasn’t easy, but thanks to a suggestion from Claude, I created a Greek-to-Latin alphabet conversion table. This allowed me to romanize names while preserving letter case, which finally enabled Compromise to extract the famous Γύγης (Gyges).
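    The conversion table idea looks like this in TypeScript. This is a simplified sketch of my table: it covers the basic monotonic alphabet and drops the breathings, diaereses, and other polytonic diacritics that the real one has to handle.

```typescript
// Greek-to-Latin romanization that preserves letter case, so that
// Compromise (an English-oriented NLP library) can still spot
// capitalized proper nouns like Γύγης.

const GREEK_TO_LATIN: Record<string, string> = {
  "α": "a", "ά": "a", "β": "b", "γ": "g", "δ": "d", "ε": "e", "έ": "e",
  "ζ": "z", "η": "e", "ή": "e", "θ": "th", "ι": "i", "ί": "i", "κ": "k",
  "λ": "l", "μ": "m", "ν": "n", "ξ": "x", "ο": "o", "ό": "o", "π": "p",
  "ρ": "r", "σ": "s", "ς": "s", "τ": "t", "υ": "y", "ύ": "y", "φ": "ph",
  "χ": "ch", "ψ": "ps", "ω": "o", "ώ": "o",
};

function romanize(text: string): string {
  let out = "";
  for (const ch of text) {
    const lower = ch.toLowerCase();
    const mapped = GREEK_TO_LATIN[lower];
    if (mapped === undefined) {
      out += ch; // pass through Latin letters, spaces, punctuation
    } else if (ch !== lower) {
      // Uppercase Greek letter: capitalize the first Latin letter
      out += mapped[0].toUpperCase() + mapped.slice(1);
    } else {
      out += mapped;
    }
  }
  return out;
}

console.log(romanize("Γύγης")); // "Gyges"
```

    The case-preservation branch is the part that matters: Compromise relies heavily on capitalization to detect proper nouns, so "Γύγης" has to come out as "Gyges", not "gyges".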

    The final step was adding the names and places cited in the last four paragraphs to each text segment before vectorization. Thanks to this method, I was able to retrieve the complete story of Gyges from Plato’s Republic in one request.
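    The augmentation step itself is simple once the entities are extracted. Below is a simplified sketch of the idea; augmentWithEntities is my own hypothetical helper, and the entity lists stand in for Compromise’s output:

```typescript
// Before vectorization, append the names and places extracted from the
// preceding paragraphs to each chunk, so a chunk that never names Gyges
// still embeds close to queries about him. `window` is how many earlier
// chunks to borrow entities from (four paragraphs in my case).

function augmentWithEntities(
  chunks: string[],
  entities: string[][], // entities[i] = names/places found in chunks[i]
  window: number,
): string[] {
  return chunks.map((chunk, i) => {
    const carried = new Set<string>();
    for (let j = Math.max(0, i - window); j <= i; j++) {
      for (const e of entities[j]) carried.add(e);
    }
    return carried.size > 0
      ? `${chunk}\n[Entities: ${[...carried].join(", ")}]`
      : chunk;
  });
}
```

    Because the appended names get vectorized along with the chunk text, a semantic query for "Gyges" now matches every segment of his story, not just the one paragraph that names him.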

    Here is the final result in Genkit UI

    Beyond the Basics

    RAG is complex, and this post only scratches the surface. RAG can implement metadata and rerankers—it all depends on your use case and the problem you want to solve. In my upcoming Alexander the Great project, I have to deal with an incredible number of sources, each with its own opinion about what really happened. I use reranking to ensure that responses to questions are diverse enough, countering RAG’s natural tendency to always retrieve the same author due to their more concise writing style.
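    To illustrate the diversity idea, here is a toy greedy reranker that penalizes already-selected authors. This is my own sketch of the principle, not Genkit’s reranker API:

```typescript
// Greedy diversity reranking: each time an author already appears among
// the picked results, that author's remaining chunks pay a score penalty.
// This counters RAG's tendency to keep retrieving the most concise writer.

type Scored = { text: string; author: string; score: number };

function rerankDiverse(results: Scored[], penalty: number, k: number): Scored[] {
  const pool = [...results];
  const picked: Scored[] = [];
  const seen = new Map<string, number>(); // author -> times already picked
  while (picked.length < k && pool.length > 0) {
    let bestIdx = 0;
    let bestScore = -Infinity;
    for (let i = 0; i < pool.length; i++) {
      const adjusted = pool[i].score - penalty * (seen.get(pool[i].author) ?? 0);
      if (adjusted > bestScore) {
        bestScore = adjusted;
        bestIdx = i;
      }
    }
    const chosen = pool.splice(bestIdx, 1)[0];
    seen.set(chosen.author, (seen.get(chosen.author) ?? 0) + 1);
    picked.push(chosen);
  }
  return picked;
}
```

    With a penalty of 0.2, a second Arrian chunk scoring 0.85 loses to a Plutarch chunk scoring 0.8, which is exactly the behavior I want when sources disagree about what really happened.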

    [Reference: https://genkit.dev/docs/rag/]

    Conclusion

    This post ends here, but not my newsletter—that’s where I plan to share my best work on Classical Education and AI, completely free. I’m convinced the two subjects are linked. If you’re not already a subscriber, you should be, because both Classical Education and AI represent precious knowledge right now.
