# Phil Johnston — Full Content

> Developer Relations professional, photographer, and builder

---

## Code Is Commodity. Art Direction Is the Moat.

URL: https://philjohnstonii.com/blog/hb3sdhgq74qbq63nhp90s6h4gf5cpa

Summary: AI can generate code, but it can’t replace the human eye for design. Game artists, trained in composition, color theory, and spatial storytelling, are uniquely positioned to shape the next generation of software interfaces.

In “The Future of Micro-Niche AI Tools,” I ended with the idea that AI unlocks human creativity rather than replacing it. I still believe that. But I have been thinking about which humans specifically benefit the most from this shift, and I keep arriving at the same unexpected answer.

### What AI Can and Cannot Generate

AI can generate code. It generates it fast, and for most routine tasks it generates it well enough to ship. AI can generate documentation, test cases, business logic, database schemas, and deployment configurations. Give it a well-written prompt and it will produce working software in minutes.

What AI struggles with is coherent visual worldbuilding. Not isolated images. Image generation is impressive and getting better every month. The problem is consistency: creating a visual system where every element feels like it belongs to the same world, where the color palette tells a story, where the spatial layout guides attention, where the emotional tone is deliberate and sustained across every interaction.

This is not a matter of better training data or larger models. Coherent visual worldbuilding requires the kind of intentional design thinking that comes from understanding how humans experience space, light, color, and emotion simultaneously. It requires the ability to make a hundred small aesthetic choices that all reinforce the same feeling.
### Why Game Artists Have Exactly These Skills

Video game concept artists, environment designers, character designers, and UI artists have spent decades learning to build immersive, consistent visual systems under extreme constraints. A game environment needs to be beautiful, but it also needs to communicate gameplay information. A character design needs to be distinctive at 20 pixels tall on screen. A UI layout needs to be readable during a fast-paced action sequence.

These artists work within tight technical budgets (polygon counts, texture memory, frame rate targets) while creating worlds that players spend hundreds of hours inside. That constraint-driven creative process produces a specific kind of skill: the ability to make a limited palette of tools create an emotionally complete experience.

That skill is exactly what on-demand software needs.

### The Experience Layer

In the first post of this series, I introduced the concept of “striking an instance.” A user describes what they need, and software gets generated for them. The logic is commodity. The data layer is standardized. The SDLC pipeline ensures reliability. But what makes a user actually want to use the tool? What makes them return to it tomorrow instead of generating a different one?

The experience layer. The way the interface looks, feels, and responds. The visual consistency that makes a complex tool feel simple. The micro-interactions that make data entry feel less like a chore.

Most AI-generated interfaces today look like what they are: functional layouts with default styling. They work, but they do not feel intentional. There is no art direction. No visual system. No sense that someone thought about how the colors, typography, spacing, and motion work together to create a specific experience.

Game artists think in exactly these terms.
They call it “visual language” or “art direction,” and it encompasses everything from the macro (overall mood and setting) to the micro (how a button highlights when you hover over it). This is the layer that AI cannot generate from a text prompt, because the prompt would need to encode hundreds of aesthetic decisions that the artist makes intuitively.

This extends far beyond the gaming industry. Product design for consumer applications, AR and VR interfaces, AI agent front-ends, dashboard visualization, and even the micro-niche tools I have been writing about all benefit from someone who thinks in visual systems.

Consider the property management tool I am building. The data layer is Markdown in a git repository. The API is FastAPI. The front-end is HTMX. All of that is functional. But the difference between a tool I tolerate and a tool I enjoy using comes down to visual decisions that have nothing to do with the code. Does the maintenance request list feel urgent when there are overdue items? Does the financial summary feel trustworthy? Does the contractor assignment flow feel efficient?

These are visual and interaction design questions, and the people best equipped to answer them are the ones who have spent careers making digital experiences feel emotionally coherent.

I come at this from a different angle. My photography practice, heavy on macro, surreal compositions, and pattern recognition, has taught me that visual intentionality changes how people experience information. A data table can communicate the same numbers as a well-designed dashboard, but the dashboard tells a story. That storytelling layer is what game artists bring to every project.

### The Iterative Creative Process

There is another reason game artists are uniquely positioned for this moment. Their entire workflow is iterative and constraint-driven. A concept artist does not paint a final piece from scratch.
They sketch, get feedback, revise, get more feedback, and refine through dozens of iterations within strict technical constraints. That is exactly the workflow that AI-assisted creation demands. The human sets the direction, the AI generates options, the human curates and refines, the AI iterates. Game artists have been working this way for decades. The only thing that changed is the tool doing the initial generation.

### What This Means

As on-demand software proliferates, the differentiator shifts from functionality to experience. Code becomes commodity. Data standards become infrastructure. SDLC becomes automated. What remains is the experience layer: the visual and interaction design that makes software feel like it was made for a human, not generated by a machine.

The people who know how to create that experience, who think in visual systems, who can maintain aesthetic consistency across complex interactive environments, who work iteratively under tight constraints, are game artists. They have been training for this moment for 30 years.

If you are building AI-generated tools and wondering why they feel generic, the answer is not better prompts. It is better art direction. And the talent pool for that skill is sitting in studios making virtual worlds, waiting for someone to realize their skills apply far beyond games.

---

## The 80/20 Rule for AI Code Review

URL: https://philjohnstonii.com/blog/wc9s23btpmv673x8k9ihp15w2b668e

Summary: LLM-powered agents can simulate QA, security, and architecture expertise well enough to catch 80% of the issues a human reviewer would. The question is whether that last 20% still justifies the cost of a dedicated specialist.

My agent pipeline has a QA agent, a Security agent, and an Architect agent. Each one reviews the work of the Engineering agents, checking for bugs, vulnerabilities, and design inconsistencies. They are not humans.
They are LLM-powered roles that simulate domain expertise. And they are surprisingly good. Good enough that I have started asking a question I did not expect to ask this soon: when does simulated expertise actually replace the real thing?

### What Acting Experts Do Well

An LLM prompted to act as a security auditor can catch a remarkable number of common issues. SQL injection patterns, insecure default configurations, hardcoded credentials, missing input validation, outdated dependency versions. These are pattern-matching tasks, and pattern matching is what language models excel at.

The same applies to QA. A model asked to review code for test coverage gaps, edge case handling, and regression risks will flag issues that many junior and mid-level engineers would miss in a first pass. It does not get tired. It does not rush because it is Friday afternoon. It applies the same scrutiny to the 200th file as it does to the first.

In my pipeline, these acting experts operate as automated gates. Code does not advance to the next stage until the QA agent confirms test coverage, the Security agent clears the vulnerability scan, and the Architect agent validates that the implementation matches the specification. Each review is structured, consistent, and fast.

For routine review tasks, this works. It works well enough that I trust it for the majority of checks in my personal projects.

### Where Acting Experts Fall Short

Here is where it gets interesting. There is a category of review that language models are bad at, and it is the category that matters most for high-stakes decisions.

Taste. Is this API design elegant, or just functional? Does this error message help the user, or just satisfy the requirement? An LLM can tell you whether an error message exists. It cannot tell you whether the error message makes a frustrated developer feel supported or confused.

Contextual judgment. Should this feature be built at all?
Is this the right abstraction, or will it create problems six months from now when requirements change? These questions require understanding the business context, the team dynamics, and the product strategy in ways that a prompt cannot fully capture.

Novel edge cases. Acting experts are trained on patterns. They catch known patterns well. They struggle with situations that do not map to existing patterns. A truly novel security vulnerability, one that exploits an interaction between two systems in an unexpected way, is exactly the kind of thing that slips past a simulated reviewer because it does not match anything in the training data.

Ethical gray areas. Is this feature manipulative? Does this data collection practice respect user privacy even if it is technically legal? These are judgment calls that require values, not just knowledge.

I have seen this in my own work. I have spent over 15 years reviewing developer content, documentation, and tutorials. The things I catch that an AI reviewer misses are almost always about tone, not accuracy. The documentation is technically correct but subtly condescending. The tutorial works but teaches a bad habit. The API design is functional but will frustrate anyone who tries to extend it.

### The 80/20 Split

My working hypothesis is that acting experts are sufficient for roughly 80% of review tasks. The routine checks, the pattern-matching, the compliance validation, the test coverage analysis. All of this can and should be automated. It is faster, more consistent, and more thorough than human review at scale.

The remaining 20% is where humans earn their keep. Novel situations, creative judgment, ethical considerations, and the kind of taste-based feedback that comes from years of experience in a domain. These are the moments where a human reviewer says “this is technically fine but it is going to confuse people” and that feedback changes the trajectory of the product.

The pattern I use in my pipeline reflects this split.
AI agents handle the automated gates. For high-stakes decisions, there are explicit approval checkpoints where a human reviews the output before it advances. The property management system I am building has owner-approval gates for financial decisions and legal documents. Same principle, different domain.

### Human in the Loop Is Not a Compromise

There is a temptation to frame “human in the loop” as a temporary measure. A crutch that we will eventually discard as AI gets better. I do not think that is right.

Human judgment is not a less efficient version of AI review. It is a fundamentally different kind of review. It draws on embodied experience, emotional intelligence, aesthetic sensibility, and ethical reasoning that language models simulate but do not possess.

The goal is not to remove humans from the loop. The goal is to put them in the right part of the loop. Do not waste a senior architect’s time checking whether variable names follow the style guide. That is what the AI reviewer is for. Put the senior architect in front of the design decisions that will determine whether the system scales, whether the developer experience is good, and whether the abstraction layer is right.

### What This Means in Practice

If you are building AI-assisted workflows, the question is not “can AI replace human review?” It is “which reviews should AI handle, and which reviews need a human?”

For automated testing, vulnerability scanning, style compliance, and coverage analysis: let the acting experts handle it. They are better at consistency and scale than any human team.

For design taste, strategic direction, novel risk assessment, and ethical judgment: keep humans in the loop. Not because AI cannot attempt these tasks, but because the cost of getting them wrong is high and the value of getting them right compounds over time.

The 80/20 split is not a permanent ratio. As models improve, the boundary will shift.
But the category of tasks that requires human judgment will not shrink to zero. It will just become more concentrated on the decisions that matter most.

---

## Vibe Coding Got You the Prototype. Now What?

URL: https://philjohnstonii.com/blog/pua6p9xgs4r03fjb8570vkp17cabzm

Summary: Vibe coding is great for prototypes, but production software still needs guardrails. Here’s why AI-generated code demands the same SDLC discipline we’ve always applied, and how an 8-stage agent pipeline keeps things from falling apart.

This is the third post in a series about on-demand software. The first covered why most business software is about to become regenerable. The second explained why document standards are the interoperability layer that makes it all work. This one is about the part that makes it trustworthy.

Because here is the uncomfortable truth about vibe coding: it is fantastic for getting something working. It is terrible for keeping it working.

### The Prototype Trap

I speak from experience. Earlier this year I built a terminal-based tool for reading community forum data. The first version came together in an afternoon. I described what I wanted, an LLM generated the code, and I had a working TUI by end of day.

Then I used it the next day. And the day after that. And within a week, I had found three bugs, wanted two new features, and needed to refactor the data layer because my initial description had not accounted for edge cases that only showed up with real usage.

That iteration process, finding bugs, adding features, refactoring as requirements clarify, is the software development lifecycle. I was not following a formal SDLC process. I was just doing what every developer does when a prototype becomes a daily-driver tool.
But the practices were the same: version control to track changes, testing to catch regressions, and careful iteration to avoid breaking what already worked.

The moment on-demand software becomes something a user depends on, it needs exactly these guardrails. Without them, every improvement risks breaking something else. Every new feature is a coin flip between progress and regression.

### Why AI Generates Regressions Fast

LLMs are incredibly good at generating code. They are also incredibly good at generating regressions. The same capability that lets a model build a working application in one session also lets it subtly break that application in the next session when you ask for a change.

This is not a flaw in the technology. It is a fundamental property of how generative models work. Each generation is statistically independent. The model does not remember that the authentication flow depends on a specific token format, or that the date parser expects ISO 8601, or that the CSS layout breaks if you change the grid columns. It generates the best response to the current prompt, and sometimes that response contradicts a decision made in a previous generation.

In traditional software development, this problem is solved by automated testing. You write tests that encode your assumptions, and those tests catch it when a change violates them. In the AI-generated software world, the same principle applies. You need a pipeline that validates output before it reaches the user.

### What a Pipeline Looks Like

I have been building this exact system. My agent pipeline project defines eight workflow stages and nine agent roles that mirror a traditional SDLC process, compressed and automated.

The flow works like this. An Orchestrator agent receives a task description and routes it to a Project Manager agent. The PM produces a structured brief. An Architect agent turns that brief into a technical specification. Engineering agents build from the spec. QA agents run automated tests.
Security agents scan for vulnerabilities. Accessibility agents check compliance. And only after all of those gates pass does the output move toward deployment.

Each handoff is a checkpoint. Each checkpoint has defined acceptance criteria. The agents are not making subjective judgment calls about whether something “looks right.” They are validating against specific, codified requirements.

This is SDLC. It is not bureaucracy. It is the thing that makes the difference between a prototype that works once and software that works reliably over time.

### The Five-Tier Reality

Not every project needs the full pipeline. A quick utility script does not need a security audit and accessibility review. A production application handling financial data needs all of that and more.

My framework defines five project tiers based on complexity and risk. A micro task, something that takes minutes, gets minimal process. A production deployment gets the full treatment: architecture review, test coverage requirements, security scanning, accessibility validation, and staged rollout.

The insight is that SDLC is not one-size-fits-all. The practices scale with the stakes. Vibe coding is perfectly appropriate for tier one. It is dangerously insufficient for tier four or five. The pipeline provides the right level of rigor for each context.

### Security Is Not Optional

Here is the part that keeps me up at night. AI-generated code inherits whatever patterns the model learned from training data. Some of those patterns include insecure defaults, outdated authentication methods, and vulnerable dependency versions.

In my pipeline, a git push triggers automated security scanning. Known vulnerability patterns are checked against the generated code. Issues are reported as structured findings that the Engineering agent must resolve before the code can merge.

This is not paranoia. This is standard practice in any mature engineering organization.
The difference is that when humans write code, security review happens during code review. When AI generates code, security review needs to be automated, because the volume and velocity of generated code make manual review impractical.

On-demand software that skips security validation is on-demand liability. The tools need to be trustworthy, and trustworthiness comes from process, not from hope.

### What This Means for On-Demand Software

If the future I described in the first two posts arrives, and I believe it will, then SDLC practices become the trust infrastructure that makes on-demand software viable for serious use.

A user generates a maintenance tracking tool. The generation pipeline runs it through schema validation to ensure it respects the data contracts from the standards layer. Automated tests verify that the core workflows function correctly. Security scanning checks for common vulnerabilities. Accessibility validation ensures it meets baseline usability standards.

All of this happens in seconds, not weeks. The pipeline is automated, and the guardrails are codified. The user never sees the process. They just get software that works and keeps working.

That is the vision. Not vibe coding with fingers crossed, but AI-generated software backed by the same engineering discipline that makes traditional software reliable. The SDLC is not dead. It is faster, automated, and more necessary than ever.

The next post in this series steps back from the technical and asks a different question: when AI agents review each other’s work, acting as domain experts in QA, security, and architecture, is that review good enough? Or do we still need humans in the loop?
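The stage-gate and tier ideas described in this post can be sketched in a few lines of Python. This is a minimal illustration, not the actual pipeline: the gate names, the tier-to-gate mapping, and the tier-five human checkpoint are assumptions drawn from the prose, not the framework's real configuration.

```python
# Minimal sketch of a tiered, gated agent pipeline (illustrative only).
# Gate names mirror the roles described in the post; the tier mapping
# and the tier-5 human checkpoint are assumptions for illustration.

AUTOMATED_GATES = ["qa_tests", "security_scan", "accessibility_check"]

# Higher tiers run more gates; tier 1 is where vibe coding is appropriate.
GATES_BY_TIER = {
    1: [],
    2: ["qa_tests"],
    3: ["qa_tests", "security_scan"],
    4: AUTOMATED_GATES,
    5: AUTOMATED_GATES,  # plus an explicit human approval checkpoint below
}

def run_pipeline(task: str, tier: int, checks: dict) -> dict:
    """Run each gate for the tier; halt on the first failing gate.

    `checks` maps a gate name to a callable returning True/False for this task.
    """
    for gate in GATES_BY_TIER[tier]:
        if not checks[gate](task):
            return {"task": task, "status": "blocked", "failed_gate": gate}
    # High-stakes work never auto-advances: a human reviews it first.
    needs_human = tier >= 5
    return {
        "task": task,
        "status": "pending_approval" if needs_human else "approved",
        "failed_gate": None,
    }

result = run_pipeline(
    "add lease-expiry reminders",
    tier=4,
    checks={g: (lambda t: True) for g in AUTOMATED_GATES},
)
```

The point of the sketch is the shape, not the checks: output only moves forward when every codified gate for its tier passes, and the human checkpoint is a first-class pipeline stage rather than an afterthought.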
---

## Building GoPro’s Developer Program from Zero to 330 Partners

URL: https://philjohnstonii.com/blog/building-gopros-developer-program-from-zero-to-330-partners

Summary: How Phil Johnston built GoPro's developer program from zero to 330+ partners including NASA, Google, and BMW. Lessons on building developer ecosystems at hardware companies.

When I joined GoPro in 2014, the company had no developer program, no SDK, no documentation, and no partner ecosystem. By the time the program was running, it had 330+ partner companies, a team of 10, and names like NASA, Google, BMW, and Jaguar Land Rover building on the platform. This is the story of how that happened, and what I learned about building developer programs from nothing.

### Starting from Zero

GoPro in 2014 was a hardware company with massive consumer brand recognition and zero developer infrastructure. There was no public API, no partner portal, no technical documentation. The cameras were everywhere, but the ecosystem around them was entirely closed.

The opportunity was obvious: GoPro cameras were already showing up in professional workflows at media companies, automotive manufacturers, aerospace organizations, and research labs. These organizations were hacking together their own integrations. They wanted official support, and GoPro had no mechanism to provide it.

My job was to build that mechanism from scratch.

### The First Year: Making the Internal Case

The hardest part of zero-to-one DevRel isn’t the technical work. It’s the internal alignment. At a hardware company, engineering resources are finite and already committed to the next product cycle. Convincing leadership to invest in a developer platform means showing a credible path to revenue or strategic value that justifies pulling engineers off the roadmap.
I spent the first phase building the business case: which partners would move the needle, what an SDK and documentation package would cost to build, and what the addressable market looked like if we opened the platform. The key insight was framing it not as “developer relations” (which meant nothing to a hardware executive team) but as a “strategic moat” in the form of partnerships enabled by technology. Same work, different language.

### Scaling to 330+ Partners

Once the SDK shipped and the partner portal launched, growth came from a deliberate strategy of targeting high-visibility partners first. Landing NASA, Google, and BMW as early adopters did two things: it validated the platform technically (if NASA’s JPL can build on it, your startup can too) and it created a marketing flywheel that attracted the next wave of partners.

The engineering team grew to 10 people supporting the ecosystem, handling everything from SDK development to partner integration support. We built the onboarding flow, the documentation, the sample code, and the partner success process that took a company from “interested” to “shipping an integration.”

The program wasn’t just about partner count. It was about ensuring that each integration actually worked and delivered value. A partnership with BMW meant GoPro cameras integrated into vehicle dashboards. A partnership with a sports analytics company meant automatic highlight generation. Every integration had to solve a real problem for a real user.

### What I Took Away

Building GoPro’s developer program taught me three things that I still apply to every engagement.

First, developer programs at hardware companies are fundamentally different from software companies. You’re dealing with physical constraints, firmware update cycles, and engineering teams that think in silicon, not APIs.

Second, enterprise partnerships and indie developer communities require completely different approaches.
At GoPro, the value was in enterprise partnerships, not a long tail of indie developers. Knowing which model fits your company is the most important strategic decision in early-stage DevRel.

Third, the zero-to-one phase is mostly about internal selling. You spend more time in slide decks and executive meetings than you do writing documentation. That’s normal, and it’s necessary. The program doesn’t exist until the organization believes it should.

GoPro taught me how to build something from nothing at a company that had never invested in developers before. That pattern, making the case internally, shipping the infrastructure, landing the first partners, and scaling the program, is the same one I’ve repeated at BMW and 1Password, and the same one I bring to every fractional engagement today.

---

## Standards Are the New Moats

URL: https://philjohnstonii.com/blog/standards-are-the-new-moats

Summary: If on-demand software is the future, interoperability is its foundation. JSON Schema, OpenAPI, MCP, and llms.txt are becoming the standards that determine whether AI-generated tools can actually talk to each other.

In my last post I argued that most business software is about to become on-demand. Users describe what they need, an LLM generates it, and the tool exists as long as they need it. If that future sounds exciting but fragile, you are paying attention.

Because here is the problem nobody is talking about yet: if everyone is generating their own software, how does any of it talk to each other?

### The Interoperability Problem

Picture this scenario. A property manager says: “Build me a tool that tracks tenant maintenance requests and feeds summaries into my accounting tool.” Both tools get generated on the fly. The maintenance tracker spins up. The accounting integration spins up.
And then nothing happens, because the two tools have no agreement on what a maintenance request looks like, what fields it contains, or how to pass data between them.

In the current SaaS world, this problem is solved by vendor-specific APIs and marketplace integrations. Salesforce talks to HubSpot through Zapier. Slack talks to Jira through a plugin. Each integration is a custom-built bridge between two proprietary systems.

That model does not scale in a world of on-demand software. You cannot build a custom integration for every tool that gets generated on the fly. You need something more fundamental. You need shared standards for how data is shaped, how tools describe their capabilities, and how systems negotiate with each other.

### The Standards That Matter

This is where document standards become critical infrastructure. Not the exciting kind of infrastructure that gets keynote talks at conferences. The boring, essential kind that makes everything else work.

JSON Schema defines data shapes. If every generated tool agrees that a “maintenance request” has a tenant ID, a description, a priority level, and a timestamp, then any two tools can pass that object back and forth without negotiation. The schema is the contract.

OpenAPI describes what an API can do. If your on-demand tool exposes its capabilities through an OpenAPI specification, any other tool (or any LLM) can discover those capabilities and interact with them programmatically. No documentation hunting. No reverse engineering. The spec is the documentation.

MCP (Model Context Protocol) is doing for AI tool interoperability what USB did for hardware peripherals. An MCP server lets an AI assistant interact with your tool directly, pulling in data, executing actions, and chaining operations across multiple services. It is an early version of the universal interface contract that on-demand software needs.

llms.txt is the simplest standard of the bunch, and that is exactly why it matters.
A plain text file at your domain root that tells AI models what your product does, who it is for, and how to use it. Think of it as robots.txt for the AI era.

### Historical Parallels

Every computing era has been defined by the standards that won adoption. USB replaced a dozen proprietary connectors. HTTP made the web possible. RSS made content syndication work (until social platforms killed it, but the standard itself was brilliant). TCP/IP connected networks that had no business talking to each other.

The pattern is always the same. The standards that win are the ones simple enough for broad adoption. Not the most technically elegant. Not the most feature-complete. The simplest ones that solve the 80% case and get out of the way.

JSON Schema is winning that race for data shapes. OpenAPI is winning it for API descriptions. MCP is early but has momentum because Anthropic, the company behind Claude, is pushing it aggressively and the developer community is responding. The on-demand software era will be built on these standards, or on whatever replaces them if they fail the simplicity test.

### I Am Already Using This Pattern

My agent pipeline project uses interface contracts between agents. The Orchestrator hands off a task description to the PM agent. The PM agent produces a structured brief that the Architect agent consumes. The Architect produces a technical specification that the Engineering agent builds from.

Each handoff works because the agents agree on the data shape. The PM does not send the Architect a blob of unstructured text and hope for the best. There is a defined schema for what a project brief contains, what a technical spec contains, and what a task assignment looks like.

That is the same pattern at a different scale. When I generate a maintenance tracker and an accounting integration, they need the same kind of contract between them. Not a verbal agreement. Not a shared database. A formal schema that both tools respect.
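To make the contract idea concrete, here is a toy version of the pattern in Python: a shared schema for the maintenance-request shape described earlier, and a validator that both generated tools could run before exchanging data. The field names and the dict-based schema are illustrative assumptions; a real system would publish this as a JSON Schema document and validate it with an off-the-shelf JSON Schema library.

```python
# Toy schema contract for a maintenance request (field names are
# illustrative assumptions, not the real system's schema).

MAINTENANCE_REQUEST_SCHEMA = {
    "tenant_id": str,
    "description": str,
    "priority": str,     # e.g. "low" | "medium" | "high"
    "created_at": str,   # ISO 8601 timestamp
}

def validate(record: dict, schema: dict) -> list:
    """Return a list of contract violations; an empty list means valid."""
    errors = [f"missing field: {k}" for k in schema if k not in record]
    errors += [
        f"wrong type for {k}: expected {t.__name__}"
        for k, t in schema.items()
        if k in record and not isinstance(record[k], t)
    ]
    return errors

request = {
    "tenant_id": "t-1042",
    "description": "Kitchen faucet leaking",
    "priority": "high",
    "created_at": "2025-01-15T09:30:00Z",
}
```

The maintenance tracker validates before it sends; the accounting integration validates before it accepts. Neither tool needs to know anything else about the other, which is exactly what makes the schema, rather than the application, the durable asset.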
### The Regulatory Angle

Here is something that makes this transition faster than people expect: regulated industries already mandate document standards. Healthcare has HL7 and FHIR for patient data. Finance has FIX for trading messages and XBRL for financial reporting. Real estate, my current domain, has MISMO for mortgage data.

These standards exist because regulators understood decades ago that interoperability requires formal contracts. The on-demand software era is not inventing this concept. It is extending it to every other domain that does not yet have mandated standards.

The companies generating on-demand tools for healthcare will have to output FHIR-compliant data. The companies generating financial tools will have to respect XBRL schemas. Compliance is not an obstacle to on-demand software. It is a forcing function that accelerates the adoption of standards.

### Why This Matters Now

If my argument from the last post holds, that most business software is a regenerable UI layer over commodity logic, then the moat shifts. It is no longer about the application. It is about the data format and the interface contract.

The companies that own the standards will have the same structural advantage that USB-IF has over every peripheral manufacturer. They will not make the tools. They will define the contracts that make the tools interoperable. And in a world where the tools themselves are generated on the fly, that contract layer becomes the most valuable piece of the stack.

The next post in this series covers the other half of the reliability question: what happens when on-demand software needs to be trustworthy enough to use twice. That is where software development lifecycle practices come in, and why vibe coding is great for prototypes but terrible for anything you need to depend on.
--- ## The Last SaaS You’ll Ever Subscribe To URL: https://philjohnstonii.com/blog/the-last-saas-youll-ever-subscribe-to Summary: Most business software is a UI layer on top of well-documented rules. As LLMs get better at generating those interfaces on demand, the subscription model starts to look like a relic. This is what comes next. The Last SaaS You’ll Ever Subscribe To In September, I wrote about the future of micro-niche AI tools. The core idea was simple: instead of hunting for software that sort of fits your needs, you describe what you want and an LLM builds it for you in minutes. That post focused on personal tools. Scripts, dashboards, one-off utilities tuned to how you actually work. This post is about what happens when that same idea scales to business software. The UI Is the Product. The Logic Is Commodity. Think about the software most businesses run on. CRMs, project management tools, invoicing systems, expense trackers, onboarding checklists. Strip away the branding and the pricing page, and what you have is a user interface layer sitting on top of well-documented business processes. The intellectual property in most SaaS products is not the logic. Invoicing follows rules. Project tracking follows rules. CRM workflows follow rules. The value has always been in persistence (your data stays somewhere), integrations (it connects to your other tools), and muscle memory (your team already knows how to use it). LLMs can already generate full working applications from a description. I have watched models produce functional CRUD apps, dashboards, and form-based workflows in a single session. The gap is not capability. The gap is statefulness. A user needs an instance they can return to tomorrow, customize over time, and trust with real data. 
Striking an Instance The concept I keep coming back to is “striking an instance.” If you work in infrastructure, you know the idea. You spin up a container, configure it, point traffic at it, and it runs. When you are done, you tear it down. Now imagine that for a non-technical user. You describe what you need: “I want a tool that tracks tenant maintenance requests, lets me assign them to contractors, and sends me a weekly summary.” An LLM generates the application. It spins up with your data, your preferences, your integrations. You use it until you do not need it anymore, or you keep it running and iterate on it over time. This is not theoretical. I am already building this way. My property management system started as Markdown files in a git repository, and I am layering a FastAPI and HTMX interface on top of it. The logic is simple. The data format is portable. The interface can be regenerated or redesigned at any point because the underlying structure is clean and well-documented. That is on-demand software. Not an app you download. Not a subscription you pay monthly for. A tool that exists because you described it. Proof It Already Works Earlier this year I replaced an entire SaaS workflow with a vibe-coded TUI built in an afternoon. I was using Khoros for community management and needed a specific view of the data that their product did not offer. Instead of filing a feature request or shopping for a different vendor, I asked an LLM to build me a terminal-based reader that pulled exactly the data I needed and displayed it exactly the way I wanted. It took one afternoon. Not one sprint. Not one procurement cycle. One afternoon. Now scale that idea. Expense reports. Lease tracking. Onboarding checklists for new employees. Inventory management for a small warehouse. Every one of these is a well-documented process that an LLM can turn into a working tool if you give it the right description and a clean data layer underneath. 
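To make the "clean, portable data layer" idea concrete, here is a minimal sketch that parses a maintenance request stored as a Markdown file into a plain record. The file format and field names are invented for illustration, not the actual format of my system.

```python
# Hypothetical one-request-per-file Markdown format:
#
#   # Leaking kitchen faucet
#   status: open
#   assigned: ACME Plumbing
#
#   Tenant reports a slow drip under the sink.
#
def parse_request(markdown: str) -> dict:
    """Parse one maintenance-request file into a plain dict."""
    lines = markdown.strip().splitlines()
    record = {"title": lines[0].lstrip("# ").strip(), "notes": []}
    for line in lines[1:]:
        if ":" in line and not line.startswith(" "):
            # "key: value" metadata line
            key, _, value = line.partition(":")
            record[key.strip()] = value.strip()
        elif line.strip():
            # anything else is free-form notes
            record["notes"].append(line.strip())
    return record

doc = """# Leaking kitchen faucet
status: open
assigned: ACME Plumbing

Tenant reports a slow drip under the sink.
"""
print(parse_request(doc)["status"])  # open
```

Because the output is a plain dict, any generated interface, a FastAPI and HTMX app today, a regenerated one tomorrow, can render the same records. The data outlives the UI.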
The SaaS Pricing Model Breaks Down Here is the part that should make SaaS executives uncomfortable. The traditional model charges you monthly for access to software that does not change much between updates. You pay for the privilege of using someone else’s UI on top of logic that is fundamentally commodity. When a user can regenerate the tool, the pricing model collapses. Why pay $49 per seat per month for a project management tool when you can describe what you need and have a working version in 20 minutes? The answer today is still persistence, integrations, and trust. But those barriers are eroding fast. Persistence is a solved problem. Databases are cheap and plentiful. Integrations are moving toward standard protocols (more on this in my next post). Trust is the last real moat, and it is a function of reliability and security, both of which can be engineered into a generation pipeline. The Counterarguments Are Real but Solvable I am not going to pretend this transition is frictionless. Vendor lock-in is a genuine concern. Enterprises have decades of data gravity pulling them toward existing platforms. Compliance requirements in healthcare, finance, and real estate mean you cannot just spin up software and hope for the best. But these are engineering problems, not fundamental barriers. Data portability standards exist. Compliance frameworks can be codified into generation templates. The same LLMs that build the software can also validate it against regulatory requirements before it ever touches production data. The real question is not whether on-demand software will replace SaaS. It is how quickly the tooling around it matures to handle the edge cases that enterprises care about. Standards, interoperability, and software development lifecycle practices will determine the pace. Those are topics I will cover in the next two posts in this series. 
What This Means for You If you are a founder building a SaaS product, the question to ask yourself is: what part of my product cannot be regenerated by an LLM? If the answer is mostly the data layer and the integrations, your moat is thinner than you think. If you are a developer building tools for yourself or your team, you are already living in this future. Every script you generate, every dashboard you spin up, every workflow you automate with a prompt is on-demand software in its earliest form. And if you are a user who has ever been frustrated by software that almost does what you need, the future looks like this: you describe what you want, and the tool exists. Not a tool that was designed for a million users and sort of works for your case. A tool that was designed for you. That is the trajectory I see. Not the death of all SaaS overnight, but a steady erosion of the moats that justified subscription pricing in the first place. The companies that survive will be the ones that own the data layer, the integration standards, or the trust infrastructure. Everyone else is selling a UI that the user can rebuild themselves. --- ## What Running DevRel at 1Password Taught Me About Open Source and Developer Trust URL: https://philjohnstonii.com/blog/what-running-devrel-at-1password-taught-me-about-open-source-and-developer-trust Summary: Lessons from building DevRel at 1Password: managing 900+ open source projects, scaling technical content, and why trust is the product for security-focused developer tools. What Running DevRel at 1Password Taught Me About Open Source and Developer Trust 1Password is a product that developers already love. That’s a rare starting position for a DevRel program. Most companies have to convince developers to care. 
At 1Password, the challenge was different: how do you build a formal developer relations function around a product that already has organic developer affinity, without breaking the trust that earned it? The Starting Point When I came in, 1Password had over 900 open source projects in its ecosystem but no unified DevRel vision tying them together. There was developer interest, technical content being produced in pockets, and a strong engineering brand. What was missing was a strategy connecting all of it. My role was to establish that strategy: define what developer relations meant for 1Password, build the content engine, manage the open source program, and serve as the company’s developer advocate across channels. Open Source at Scale Managing 900+ open source projects is a fundamentally different challenge than running a handful of flagship repos. At that scale, the questions shift from “how do we maintain this project” to “how do we maintain a coherent developer experience across hundreds of touchpoints.” The approach I took was to focus on the developer’s perspective rather than the project’s perspective. A developer interacting with 1Password’s open source ecosystem doesn’t think in terms of individual repos. They think in terms of tasks: “I want to integrate 1Password into my CI/CD pipeline,” “I want to manage secrets in my Kubernetes cluster,” “I want to add password management to my app.” Organizing the open source program around those developer journeys, rather than around project maintenance, changed how we prioritized work, wrote documentation, and communicated with the community. Content as Developer Advocacy At 1Password, I produced and directed technical content at scale, contributing to my career total of over 123 developer tutorial videos. But the lesson I took away wasn’t about volume. It was about the role content plays in a trust-driven product. Security products live and die on trust. 
Developers won’t adopt a password manager, a secrets management tool, or an authentication SDK unless they trust the company behind it. Traditional marketing erodes that trust. Technical content builds it. The content strategy at 1Password was built around demonstrating competence rather than making claims. Show a developer exactly how the CLI works. Walk through the architecture of the secrets management system. Explain the encryption model in detail. Every piece of content was an opportunity to prove that the people behind the product understand the developer’s world. This approach to content, leading with technical depth rather than marketing polish, is something I now bring to every engagement. It’s especially relevant for API-first companies where the buyer and the user are often the same technical person. Cross-Functional GTM One of the more valuable parts of the 1Password experience was leading cross-functional go-to-market efforts. Developer relations doesn’t operate in a vacuum. Launches involve product, engineering, marketing, sales, and support, and someone needs to be the connective tissue between the developer’s perspective and the rest of the organization. At 1Password, I served that function: translating developer needs into product priorities, coordinating launch timelines across teams, and ensuring that the developer experience was considered in every GTM decision. This is a skill that doesn’t show up in most DevRel job descriptions but determines whether a DevRel program actually influences product direction or just produces content on the side. Three Lessons from 1Password Trust is the product. For security-focused developer tools, every interaction is either building or eroding trust. Content, documentation, community engagement, and open source contributions all need to be measured against that standard. Open source at scale requires journey-based thinking. 
When you have hundreds of projects, organizing around developer tasks rather than individual repos makes the ecosystem navigable and the priorities clear. DevRel is cross-functional or it’s just marketing. The value of developer relations comes from influencing product decisions, not just producing content. If DevRel doesn’t have a seat at the GTM table, it’s operating at a fraction of its potential. 1Password was the most recent chapter in a career spent building developer programs at companies that hadn’t had them before. Each one, BMW, GoPro, HERE Technologies, and 1Password, taught me something different. Together, they form the foundation of the fractional DevRel practice I run today. --- ## Your Developer Tool Is Invisible to AI. That’s a Problem. URL: https://philjohnstonii.com/blog/your-developer-tool-is-invisible-to-ai-thats-a-problem Summary: AI tools are reshaping how developers discover and evaluate APIs and SDKs. Learn what makes a developer tool visible to LLMs, why most companies are behind, and tactical steps to fix it including llms.txt, JSON-LD, MCP servers, and content strategy. Your Developer Tool Is Invisible to AI. That’s a Problem. In 2024, a developer evaluating a new library would Google it, scan the docs, maybe check a Reddit thread. In 2026, a growing number of developers skip all of that. They ask an AI assistant. “What’s the best authentication SDK for a Next.js app?” “Find me a geocoding API with good free tier limits.” “What tools should I use for secrets management in Kubernetes?” If your product doesn’t show up in those answers, you’re losing evaluations you’ll never know about. There’s no rejection email. No “we went with a competitor” notification. The AI simply never mentioned you, and the developer moved on. This is the new developer discovery problem, and most DevRel teams haven’t caught up to it yet. 
How Developers Actually Find Tools Now The shift didn’t happen overnight, but it has accelerated fast. Developers now use AI assistants at multiple points in their workflow: researching options, debugging integrations, generating boilerplate, and building prototypes. Each of those moments is a discovery opportunity, or a missed one. Here’s what changed. Traditional developer discovery was a funnel you could influence with conference talks, SEO, and content marketing. AI-assisted discovery is a black box. An LLM either knows about your product or it doesn’t. It either recommends you or it recommends your competitor. And unlike a Google result where you can at least buy an ad, there’s no paid placement in a ChatGPT response (yet). The companies winning developer attention right now are the ones treating AI discoverability as a first-class concern, not a nice-to-have they’ll get to after the next product launch. What Actually Makes a Product Visible to AI After spending 15 years building developer programs and the last two years specifically focused on AI discovery, I’ve identified the layers that determine whether an LLM recommends your tool or ignores it. Structured identity. AI tools build entity graphs to understand who makes what. If your company, product, and founder information is scattered across inconsistent naming, fragmented profiles, and missing structured data, LLMs can’t connect the dots. Schema.org markup (JSON-LD), consistent naming across platforms, and clear product-to-company attribution all matter. This is the foundation layer, and most startups get it wrong because they think of it as “just SEO.” LLM-readable content. The llms.txt standard is emerging as a way to give AI crawlers a clean summary of what your product does, who it’s for, and how to use it. Think of it as robots.txt for the AI era. 
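For illustration, here is a hedged sketch of what such a file might look like, following the Markdown layout the llms.txt proposal suggests: an H1 product name, a blockquote summary, and sections of annotated links. Every name and URL below is an invented placeholder.

```
# ExampleGeo

> ExampleGeo is a geocoding API for web and mobile apps. Free tier:
> 1,000 requests/day. REST API plus a JavaScript SDK.

## Docs

- [Quickstart](https://example.com/docs/quickstart): first request in 5 minutes
- [API reference](https://example.com/docs/api): endpoints and parameters

## Optional

- [Pricing](https://example.com/pricing): free and paid tiers
```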
A well-structured llms.txt file, combined with documentation that’s written in plain, factual language (not marketing copy), dramatically increases the chances that an LLM accurately represents your product. But here’s the part people miss: LLMs don’t just read your marketing page. They synthesize information from documentation, Stack Overflow answers, GitHub READMEs, blog posts, forum discussions, and tutorial content. The breadth and consistency of your content footprint matters as much as any single page. Tool integration. The Model Context Protocol (MCP) is changing the game for developer tools. An MCP server lets AI coding assistants interact with your API directly, pulling in docs, running queries, or generating integration code in real time. Companies that ship MCP servers are showing up in developer workflows at the exact moment of decision. It’s the difference between “the AI mentioned us” and “the AI used us.” Content signal depth. A single landing page won’t do it. AI tools weight detailed, specific, problem-solving content far more than feature lists. A blog post titled “How to build a delivery route optimizer with [your API]” gives an LLM something concrete to reference. A landing page that says “powerful geolocation APIs” gives it nothing useful. This is where DevRel and AI discoverability merge. The same content that helps developers adopt your product (tutorials, guides, case studies) is exactly the content that teaches AI tools to recommend it. Why Most Developer Tool Companies Are Behind First, the playbook hasn’t been written yet. Traditional DevRel has decades of institutional knowledge around conferences, documentation, community management, and developer marketing. AI discoverability has maybe 18 months of serious practice behind it. Most DevRel leaders are still running the 2022 playbook in a 2026 landscape. Second, it’s cross-functional in a way that’s uncomfortable. 
AI discoverability touches engineering (MCP servers, API design), marketing (structured data, content strategy), product (documentation, onboarding flows), and DevRel (tutorials, community content). No single team owns it, which means nobody prioritizes it. Third, the feedback loop is invisible. If your conference booth gets no traffic, you see it immediately. If an LLM stops recommending you, there’s no dashboard that shows the decline. The only signal is a gradual plateau in organic signups that nobody can explain. What to Do About It If you’re a technical founder or CTO at a developer tool company, here’s where I’d start. Audit your AI presence. Ask ChatGPT, Claude, and Perplexity to recommend tools in your category. If you don’t show up, or if the description is inaccurate, you have a problem. Then ask follow-up questions: “Tell me about [your product].” If the AI gives vague or outdated information, your content footprint is too thin. Implement structured data. At minimum, add JSON-LD Person and Product/SoftwareApplication schema to your site. This helps AI tools build accurate entity graphs. It takes a few hours and costs nothing. Create an llms.txt file. Publish a clean, factual summary of your product at yourdomain.com/llms.txt. Include what the product does, who it’s for, key features, pricing model, and links to documentation. Keep it straightforward. LLMs parse facts better than marketing language. Write problem-first content. Shift your content strategy from “features we ship” to “problems developers solve.” Every tutorial, blog post, and guide should be framed around a specific developer task. This creates the kind of content that LLMs can cite when a developer asks “how do I do X?” Build an MCP server. If you have an API, shipping an MCP server should be on your near-term roadmap. It puts your product inside the AI coding assistant workflow, which is increasingly where adoption decisions happen. Audit your identity coherence. 
Make sure your company name, product name, and founder names are consistent across your website, LinkedIn, GitHub, social media, and any developer directories. Fragmented identity is one of the most common reasons AI tools give confused or incomplete answers about a product. The Bigger Picture AI discoverability isn’t a separate discipline from developer relations. It’s where DevRel is heading. The skills that have always mattered in DevRel, understanding developer needs, reducing friction, creating useful content, thinking in adoption funnels, are exactly the skills needed to make a product visible to AI tools. The difference is that the audience now includes machines as well as humans. Documentation that’s clear enough for a junior developer to follow is also clear enough for an LLM to parse. Tutorials that solve real problems get cited by AI assistants. Structured data that helps Google also helps ChatGPT. The companies that figure this out early will have a compounding advantage. Every piece of content, every MCP integration, every structured data improvement makes them more visible to AI tools, which drives more adoption, which generates more community content, which makes them even more visible. The flywheel effect is real, and it’s already spinning for the companies paying attention. The ones that wait will wonder why their organic growth flatlined while their competitors seem to be everywhere. --- ## 15,000 Developers During a Pandemic: What I Learned at HERE Technologies URL: https://philjohnstonii.com/blog/iuqdpdkpo11dzgygsoz7j7cbiwq9zx Summary: How Phil Johnston onboarded 15,000 developers at HERE Technologies during the pandemic using digital-first DevRel, then expanded the cloud marketplace by 2,250x as product manager. 
15,000 Developers During a Pandemic: What I Learned at HERE Technologies In March 2020, every developer conference on our calendar disappeared overnight. I was leading developer relations technical content marketing at HERE Technologies, a location intelligence platform, and suddenly the entire playbook was gone. No booths, no talks, no hallway conversations, no hackathons. By the end of that year, we had onboarded 15,000 new developers. Here’s how, and what the numbers actually looked like. HERE Technologies provides location APIs and SDKs used by companies building mapping, logistics, and navigation products. When I joined in 2019, developer relations was event-heavy. Conferences, workshops, and in-person hackathons were the primary acquisition channels. When COVID shut everything down, we didn’t have the luxury of waiting. The platform had ambitious growth targets, and “let’s pause until events come back” wasn’t an option. I had also recently joined the company to implement a content marketing program designed to drive awareness and adoption of the developer offerings. We rebuilt the entire developer acquisition strategy around three channels that didn’t require anyone to be in the same room. Tutorial content at scale. We produced written and video tutorials targeting specific use cases: fleet tracking, store locators, route optimization, delivery logistics. Each tutorial was designed to get a developer from zero to a working prototype in under 30 minutes. The key was specificity. “How to build a delivery route optimizer with HERE APIs” performs better than “Getting started with HERE” because it matches what developers actually search for. Community engagement. We shifted from hosting events to participating in existing online communities. Stack Overflow, Reddit, Discord servers, and developer forums became the primary touchpoints. Instead of waiting for developers to come to our booth, we went where they already were. Media and content partnerships. 
We produced an open source COVID map that required one thing: signing up for a HERE developer account. That led to 100+ media articles through technical content partnerships. These weren’t press releases. They were actual news pieces about COVID, and each one embedded our map, visualizing in near real time the data that media audiences craved. Alongside the map itself, we embedded HERE branding and a link to the open source project so viewers could build a map of their own. This practical approach to content marketing put the HERE Developer program in front of millions of eyeballs, and a fraction of those viewers were curious developers. We literally drove developer traffic to the HERE developer portal with maps built as technical content marketing. The developers who came through the digital channels were, on average, more qualified than conference leads. They had already read an article and were looking to build before they ever knew about HERE. The onboarding funnel was more efficient because the content did the pre-qualification work that a booth conversation used to do. From DevRel to Product: The 2,250x Story In 2021, I transitioned from developer relations into product management for HERE’s cloud marketplace platform. This is where things got interesting from a business strategy perspective. The marketplace was serving a narrow vertical with limited growth potential. By applying the same developer-centric thinking to product strategy, we identified adjacent markets that the platform could serve with minimal technical changes. The result was a 2,250x expansion of the serviceable obtainable market. That number sounds impossible until you understand the math. The original market was deliberately narrow. Expanding it meant repositioning the platform’s value proposition and opening it to categories that had been considered out of scope. The technology didn’t change much. The strategy changed entirely. 
This experience shaped how I think about DevRel and product management as complementary disciplines. Developer relations generates the insights about what developers need. Product management turns those insights into platform decisions. When the same person holds both perspectives, the feedback loop tightens dramatically. What This Taught Me The pandemic forced a natural experiment that most DevRel teams would never have run voluntarily. It proved three things. Digital-first developer acquisition can outperform events. Not because events are bad, but because content scales in ways that conference booths cannot. A tutorial published in 2020 is still generating signups today. A conference booth generates leads for three days. Specificity wins. Developers don’t search for your product name. They search for their problem. Content that matches the problem (“how to track a delivery fleet in real time”) outperforms content that matches your product (“HERE Technologies API tutorial”). DevRel and product management are the same muscle. The skills that make someone good at developer relations (understanding developer needs, reducing friction, thinking in adoption funnels) are the same skills that make someone good at developer-focused product management. The HERE experience proved that to me firsthand. --- ## How I Help API-First Companies Build Developer Programs That Actually Work URL: https://philjohnstonii.com/blog/how-i-help-api-first-companies-build-developer-programs-that-actually-work Summary: Phil Johnston is a fractional Head of Developer Relations helping API-first companies build developer programs, partnerships, and AI discoverability. 15 years of experience at GoPro, BMW, HERE Technologies, and 1Password. How I Help API-First Companies Build Developer Programs That Actually Work Most companies know they need a developer program. 
Fewer know how to build one. And almost none want to wait 18 months for a full-time hire to figure it out. That’s where I come in. I’m Phil Johnston, and I run a fractional developer relations practice focused on API-first companies that need to go from “we should probably do something for developers” to a functioning program with measurable results, fast. The Problem I Keep Seeing Here’s a pattern I’ve watched play out a dozen times: a company ships a solid API or SDK, gets some organic traction, and then realizes they need developer relations. So they open a job req. Six months later they’ve either hired someone junior who’s learning on the job, or they’ve burned through a senior candidate’s patience with a process that took too long. Meanwhile, their competitors launched an MCP server, published a getting-started tutorial, and showed up in the first three results when a CTO asked Claude or ChatGPT for a recommendation. The window for developer mindshare is shrinking. AI tools are reshaping how developers discover, evaluate, and adopt new platforms. If your product isn’t visible to LLMs and coding assistants, you’re invisible to a growing share of your audience. What I Actually Do I operate as a fractional Head of Developer Relations. That means I embed with your team part-time and run DevRel the same way I would as a full-time leader, just scoped to what matters most right now. In practice, that usually covers four areas. Developer onboarding and experience. I audit your docs, quickstart guides, and time-to-first-integration. Then I fix the gaps. At HERE Technologies, I rebuilt the developer onboarding flow and brought in 15,000 new developers during 2020, when every conference and in-person event had shut down. The answer wasn’t “wait for events to come back.” It was tutorials, video content, community engagement, and a relentless focus on reducing friction. Program strategy and partnerships. At GoPro, I built the Developer Program from zero. 
No existing partners, no SDK, no documentation. Within a few years it had 330+ partner companies, including NASA, Google, BMW, and Jaguar Land Rover, with a team of 10 engineers supporting the ecosystem. At BMW, I led the application integration program that brought Audible, Pandora, and other major brands into the connected car. These weren’t marketing exercises. They were engineering programs with real technical integrations. AI discoverability and tooling. This is the part most DevRel teams haven’t caught up to yet. I help companies become visible to AI assistants, coding tools, and agent frameworks. That means structured data, llms.txt files, MCP server implementations, prompt-friendly documentation, and content strategies designed for how developers actually find tools in 2026. I’ve built RAG-based tutorial generation pipelines and custom social monitoring tools. I hold a Prompt Engineering certification from the team behind Meta’s Llama project. This isn’t a bolt-on service; it’s woven into everything I do. Content and advocacy at scale. Across my career, I’ve produced over 123 developer tutorial videos, written technical documentation for multiple platforms, and served as the on-camera advocate for brands like GoPro and 1Password. I understand the full content pipeline from planning to production to distribution, and I know which content actually moves developers from “browsing the docs” to “shipping in production.” The honest answer: most companies at the Series A to Series C stage don’t need a full-time VP of DevRel. They need someone who’s done it before, can stand up the program, set the metrics, create the content engine, and either hand it off to a full-time hire or continue running it on a part-time basis. I’ve built developer programs from scratch three times at companies that had never had one: BMW, GoPro, and 1Password. At each one, the challenge was the same. 
Make the case internally, define what success looks like, hire the right people, and ship results before anyone loses patience. Fractional lets me do that for more than one company at a time, and it lets you move faster than a traditional hiring process allows. Here’s what I think most DevRel leaders are underestimating: the shift in how developers discover tools. Five years ago, developer discovery was Google, Stack Overflow, Hacker News, and word of mouth. Today, a meaningful and growing percentage of developers ask an AI assistant first. They paste an error into Claude. They ask ChatGPT for library recommendations. They use Cursor or Windsurf with MCP servers that pull in documentation automatically. If your API isn’t visible in those contexts, you’re losing evaluations you never even knew were happening. No one sends you a “we evaluated your product and passed” email when the AI assistant simply never mentioned you. At HERE Technologies, I took a cloud API marketplace from a narrow vertical to a 2,250x expansion of its serviceable obtainable market by combining developer experience strategy with product management rigor. That same kind of thinking applies to AI discoverability today. It’s not just about SEO anymore. It’s about whether your product shows up when an AI agent is building a solution for someone. Who This Is For I work best with API-first companies, developer tool startups, and platform teams that have a working product but haven’t yet built the developer go-to-market motion around it. If you have an API and you’re wondering why adoption is flat, or if you’ve heard about llms.txt and MCP servers but don’t know where to start, that’s my wheelhouse. I’m based in Portland, Oregon, and I’ve been remote-first since 2019. I’ve worked across automotive (BMW), consumer hardware (GoPro), location intelligence (HERE Technologies), and cybersecurity (1Password). The common thread across all of them: building ecosystems that turn a product into a platform. 
If any of this resonates, I’d like to hear about what you’re building. You can reach me through the contact form on my site. --- ## “Phil Johnston LinkedIn” (Leaving LinkedIn and Choosing Independence) URL: https://philjohnstonii.com/blog/phil-johnston-linkedin-leaving-linkedin-and-choosing-independence Summary: After my LinkedIn account was compromised and permanently removed, I lost nearly two decades of professional connections. This is about what happens when a single platform becomes too central to your professional identity. “Phil Johnston LinkedIn” (Leaving LinkedIn and Choosing Independence) This is not a post about being angry at LinkedIn. It’s a post about what happens when a single platform becomes too central to your professional identity, and what it feels like when that platform is suddenly gone. My LinkedIn account was compromised. Whoever gained access used my account to reach out to people in my network and attempt to recruit them for a startup. That activity violated LinkedIn’s terms of service, and once that line was crossed, the account was unrecoverable. I worked with LinkedIn support and understood their decision. From their perspective, an account that has been used to harm others cannot simply be reset and trusted again. That doesn’t make the outcome any less final. I joined LinkedIn very early, around 2007 or 2008. Over nearly two decades, I built my professional presence there. Conversations, opportunities, long-running relationships, and roughly 2,000+ contacts accumulated slowly over time. It wasn’t just a list of names. It was a record of a career. And then it was gone. The experience forced me to confront something I had largely taken for granted: when your professional life is anchored to a single, centralized platform, you don’t really own it. You’re borrowing it. 
And when access disappears, so does everything attached to it. To be clear, this isn’t a moral judgment of LinkedIn. I understand why they enforce their policies the way they do. Abuse prevention requires hard lines. But understanding the reasoning doesn’t change the reality that a compromise, even a brief one, can erase decades of accumulated social capital. There’s also a practical lesson here. If you are still using LinkedIn, enable multi-factor authentication. Change your password regularly. I had recently changed mine. I had not enabled MFA. That gap was enough. Security failures are rarely dramatic. They are usually small omissions that compound. After losing access, I considered starting over. Creating a new account, rebuilding from scratch, re-adding people where possible. But that idea never sat quite right with me. Not because LinkedIn is bad, but because the fragility of the situation was suddenly obvious. If I rebuilt everything in the same place, I’d still be exposed to the same single point of failure. Instead, I decided to step away entirely. Going forward, this website is my primary professional home. It’s not optimized for engagement or growth hacks. It’s not governed by changing platform rules. It’s simply a place where my work, my ideas, my projects, and my contact information live under my control. If you want to reach me, the contact form here is the best option. I check it reliably. I’m also present on Mastodon and Bluesky, though I don’t check them as frequently. Think of those as ambient presence, not an inbox. This whole experience also made me think more broadly about the shape of professional networks. We’ve already seen that social conversation doesn’t need to live inside a single corporate platform. Mastodon proved that distributed systems can work. There’s no obvious technical reason the same idea couldn’t apply to professional identity and relationships. 
When one company holds decades of your career graph, losing access becomes catastrophic. Portability, federation, and ownership matter more than we like to admit, right up until the moment they’re gone. I’m grateful for what LinkedIn enabled for me over the years. It genuinely mattered. This isn’t a rejection of that history. It’s simply an acknowledgment that it’s time for something more resilient. If you’re reading this and trying to find me, you already have. --- ## The Future of Micro-Niche AI Tools URL: https://philjohnstonii.com/blog/future-of-micro-niche-ai-tools Summary: As someone who’s soon to be less relevant when it comes to technology, I am personally augmenting my workflows to include self-developed tools to help me get my work done. One recent example is a quick script that will test server latency. The Future of Micro-Niche AI Tools Sandcastles at Cannon Beach, Oregon We’re just scratching the surface of what AI can do when it comes to creating personalized, on-demand tools. Today, models can generate one-off scripts for data analysis, or automate a few repetitive tasks. But I see a near future where this idea extends far beyond single-use snippets. From Code Generation to Tool Creation As someone who’s soon to be less relevant when it comes to technology, I am personally augmenting my workflows to include self-developed tools to help me get my work done. One recent example is a quick script that tests server latency. Instead of going out and finding a full-fledged tool on the internet, where I may or may not have to sign up for a service, I asked an LLM (Gemini CLI, in this case) to create a script that would test latency. Within minutes I had a script that would clear caches, wait a random amount of time between requests, measure response time, and generate a human-readable report that could be acted upon immediately. 
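The script the LLM produced isn't published here, but its shape is easy to sketch. A minimal stand-in in Python, standard library only; the URL is a placeholder, and a cache-busting query parameter stands in for a real cache flush, which is OS- and server-specific:

```python
import random
import statistics
import time
import urllib.request

def measure_latency(url: str, samples: int = 5, max_wait: float = 2.0) -> list[float]:
    """Time a series of GET requests, sleeping a random interval between them."""
    timings = []
    for _ in range(samples):
        # Cache-busting query string: a stand-in for a true cache flush.
        busted = f"{url}?nocache={random.randint(0, 1_000_000)}"
        start = time.perf_counter()
        with urllib.request.urlopen(busted, timeout=10) as resp:
            resp.read()
        timings.append(time.perf_counter() - start)
        time.sleep(random.uniform(0.1, max_wait))  # avoid a predictable request cadence
    return timings

def report(timings: list[float]) -> str:
    """Render timings (in seconds) as a short human-readable summary."""
    ms = [t * 1000 for t in timings]
    return (
        f"samples: {len(ms)}\n"
        f"min: {min(ms):.1f} ms  max: {max(ms):.1f} ms\n"
        f"mean: {statistics.mean(ms):.1f} ms  median: {statistics.median(ms):.1f} ms"
    )

# Usage (hits the network): print(report(measure_latency("https://example.com")))
```

Twenty-odd lines is the whole tool, which is exactly the point: small enough to generate on demand, useful enough to keep.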
Right now, AI creates code when you ask for it. For example: a Python script to analyze CSV files, or a SQL query to answer a data question. But the real shift will be when AI begins to generate not just snippets, but entire micro-tools designed around you. These tools will feel more like apps than scripts, and they’ll reflect your personal context. Examples Across Domains Here’s what I mean by “micro-niche” in practice:

- A script that makes a basic collage
- A personalized triptych/diptych creator that adapts to your style and past work
- A script that runs a regression
- A full-fledged analysis dashboard tuned to your knowledge level, guiding tone and language
- A quiz generator
- An adaptive learning path that responds to your mental energy and comprehension in real time

Many of these tools are one-offs, but imagine them entering a toolbox of on-demand tools curated to your needs, personalized to your level of knowledge, and tweaked so that the output is optimized for your learning style. This is huge for humans, and a new form of tool usage. How Society Views AI Today Public sentiment toward AI is mixed, and trending cautious.

- Concern outweighs excitement: Surveys show 52% of U.S. adults are concerned about AI, while just 10% are excited (Pew Research).
- Workplace unease: While 82% of firms are expanding AI use, 70% of employees feel uneasy about AI managers (Investopedia).
- “AI shame” has emerged as people hesitate to use AI tools without formal support (Times of India).

Personalization as the Breakthrough The key to adoption, and to breaking through the societal anxiety around AI, is personalization at scale. AI won’t just answer your question. It will know how you like answers to be structured. It won’t just generate code. It will generate tools that look, feel, and adapt to your workflow. It won’t just output text. It will tune tone, depth, and complexity based on your current mental state. 
Think of it like a new layer of software creation: instead of going to an app store, you’ll have tools generated in real time, just for you. Do you really need a wallpaper app, or a fart sound app, or even an app to track your water intake? The only benefit I see from most apps these days is that they are in your pocket. What about all your AI scripts in your pocket, collecting relevant information for you and presenting it to you when it makes sense? Personalization and access are key. We are almost there, but the reality of it is unevenly distributed. Or, as William Gibson put it, “The future is already here – it's just not evenly distributed.” Why This Matters

- Accessibility: People who don’t know data science, design, or code will be able to use professional-grade tools.
- Speed: No more hunting for plugins or software updates. The right tool exists when you need it.
- Creativity: Photographers, writers, and makers will have personalized assistants that know their style history and help push it forward.
- Trust: People will only embrace these tools if they feel safe, human-centered, and under their control.
- Critical thinking: Micro-niche tools should enhance human insight, not replace it.

Micro-niche AI tools won’t replace human creativity. They’ll act more like invisible co-developers, unlocking it. I have experienced this. Cool “apps” that are really just features have been kicking around in my mind for years, maybe even decades. Until recently they were just ideas; now I can throw an LLM at one and have it prototyped and iterated a few times before lunch. Instead of a future where everyone uses the same software, I see a future where everyone has their own ideas, built for them in the form of software, by AI, in real time. 
--- ## Encouraging Developers to Share Their Stories in Their Own Words URL: https://philjohnstonii.com/blog/encouraging-developers-to-share-their-stories Summary: One of the most effective DevRel tactics is personally inviting developers to write about their projects in their own words. Here are the strategies and lessons learned from building a developer storytelling program. Encouraging Developers to Share Their Stories in Their Own Words As a developer relations professional, one of our goals is to highlight the voices of the developers we work with. Recently, I’ve been focusing on inviting developers to share their projects in their own words. Here are some of the tactics and lessons learned along the way. Personal Outreach: A Key Tactic One of the main strategies I’ve used is reaching out individually to developers who share interesting projects in our forum or Slack channel. I personally invite them to write a post about their project in their own words. This personal touch has been very effective: I’ve had a 100% response rate from the developers I’ve approached so far. Leveraging Ghostwriting and Reframing Another lesson learned is that sometimes developers are more comfortable if you help them reframe or ghostwrite their initial content. For instance, I took one developer’s lessons learned and turned it into a tutorial, then let them share it themselves with minimal extra work on their part. This approach made it easier for them to participate. Cross-Promotion Through Other Outlets To boost visibility, I’ve also been highlighting these posts across other media outlets, like our newsletter. While we’re still building broader community engagement, this cross-promotion helps get more eyes on the content and encourages more interaction over time. 
Why Developers’ Own Voices Matter It’s important for developers to share in their own words because it adds authenticity and helps build a more genuine community. When developers tell their own stories, it resonates more deeply with their peers and can inspire others to share as well. How do you encourage developers who are shy or less confident about sharing their work? We focus on creating a supportive environment and offer to help them shape their story so they feel more comfortable sharing. What if you don’t have time to individually reach out to every developer? Start small and focus on a few key contributors, or consider setting up a process where community members can nominate each other to share. How do you measure the success of this approach? We look at engagement over time, track how many developers participate, and see how the overall community engagement grows with these authentic stories. --- ## Micro-niche vibe coding? URL: https://philjohnstonii.com/blog/micro-niche-vibe-coding Summary: I've been building small terminal UI apps tuned to very specific personal workflows. It's vibe coding at its most practical: building tools to be more productive, one micro-niche problem at a time. Micro-niche vibe coding? I’ve been vibing lately with terminal UI (TUI) apps. Small personal tools that help me optimize a very specific task or workflow. I call these "Micro-Niche" tools. I can spin these tools up in an afternoon, where before I would have just kept plodding along without them. The n-th project that sparked this Over the years I have repeatedly started and stopped building small projects that would benefit my life, eventually giving up due to getting stuck, running out of time, or simply getting distracted. The first one that has really improved my workflow is a TUI tool that allows me to keep up with a community I manage. 
Basically we have a website that doesn’t really offer an RSS feed in a way that helps me, so using Cursor and a GraphQL endpoint, I was able to write a little TUI reader application that pulls in the most recent messages, presents them to me, allows me to filter by keywords, and then generates summaries of the threads. TUI Reader that helps me in my day job. If you are interested you can find it here, but that’s not the point of this article. Not exactly something you’d find on Hacker News’ front page, but for me it’s a perfect example of a personal workflow enhancement with custom tooling. Workflow + Quake Visor Mode I run my app inside iTerm2’s Quake Visor mode with a quick key combo, and the TUI drops down like a heads-up display. Read, scan, pop it away. It feels frictionless. Before? I’d either avoid the task altogether or waste time flipping between tabs and copy/pasting into Notion, Gemini, ChatGPT, or a text editor. The overhead sucked and took me forever to manage. Why this works for me What I love here is not the specific tool, but the vibe:

- It’s lightweight.
- It’s built for me, not a broad audience.
- It exists because AI made the “too much effort” barrier vanish.

Years ago, I’d have sketched this out and said “cool idea, let’s build it, but I don’t know Python, so let me start there...” Now I can just let AI scaffold it for me, iterate a few times, get something working, and start using it immediately. Then I iterate for a few more days and really insert it into my daily workflow. TUIs are underrated in 2025. With a little AI assist, they’ve become my go-to for building **flow tools**: helpers that don’t try to be polished apps, but instead quietly optimize my workflow. Do you have some tools you've vibe-coded to share? I’m curious: what personal apps have you created to improve your flow? TUIs, scripts, automations, weird one-offs? 
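For the curious, the core of a reader like this is small: a GraphQL query for recent threads plus a keyword filter over the results. A minimal sketch in Python with the standard library only; the endpoint URL and the `threads` schema are hypothetical, since the real community site's API will differ:

```python
import json
import urllib.request

# Hypothetical endpoint and schema; substitute your community site's real GraphQL API.
GRAPHQL_URL = "https://community.example.com/graphql"

QUERY = """
query Recent($limit: Int!) {
  threads(limit: $limit) { title body url }
}
"""

def fetch_threads(limit: int = 20) -> list[dict]:
    """POST a GraphQL query and return the list of recent threads."""
    payload = json.dumps({"query": QUERY, "variables": {"limit": limit}}).encode()
    req = urllib.request.Request(
        GRAPHQL_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)["data"]["threads"]

def filter_threads(threads: list[dict], keywords: list[str]) -> list[dict]:
    """Keep threads whose title or body mentions any keyword (case-insensitive)."""
    wanted = [k.lower() for k in keywords]
    return [
        t for t in threads
        if any(k in (t["title"] + " " + t["body"]).lower() for k in wanted)
    ]
```

The real app wraps this in a TUI and adds thread summarization, but a fetch-and-filter loop like the above is the part that stands in for the missing RSS feed.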
--- ## Phil Johnston, II URL: https://philjohnstonii.com/resume Summary: Resume of Phil Johnston, II. 15 years in Developer Relations at BMW, GoPro, HERE Technologies, and 1Password. Built the GoPro Developer Program to 330+ partners including NASA and Google. Onboarded 15,000+ developers. Based in Portland, OR. Phil Johnston, II I've spent 15 years helping developers succeed on platforms most people only read about in press releases. At GoPro, I built the Developer Program from zero to 330 partner companies, including NASA, Google, BMW, and Jaguar Land Rover, managing a team of 10 engineers to make it happen. At BMW, I led the application integration program that brought Audible, Pandora, and other household names into the connected car. At HERE Technologies, I onboarded 15,000 developers during a global pandemic and grew a cloud marketplace's addressable market by 2,250x. Most recently at 1Password, I established the company's developer relations vision across open and closed source, led cross-functional GTM, and served as chief developer advocate and content producer. My work sits at the intersection of developer experience, technical product strategy, and community building. I think in terms of time-to-first-integration, developer adoption curves, and the content that actually moves people from "browsing the docs" to "shipping in production." I've built RAG-powered tutorial generation systems, custom social media monitoring tools, and the kind of developer marketing programs that produce 123 videos and measurable onboarding results. I hold a BS in Computer Science from the University of Portland, where I was an Entrepreneur Scholar. I'm also certified in Prompt Engineering through Dr. Elvis Saravia's program (Facebook Llama Project). Portland, OR. Remote-first since 2019. Career Timeline

- 2024-2026 - Head of Developer Relations, 1Password (AgileBits)
- 2021-2023 - Dir. Product, Cloud Marketplace Platform, HERE Technologies
- 2019-2021 - Sr. Program Manager, Developer Relations, HERE Technologies
- 2017-2019 - Developer Relations Consultant, Jaguar Land Rover
- 2014-2017 - Director, GoPro Developer Relations
- 2011-2014 - Head of BMW Group Developer Relations
- 2008-2011 - Co-Founder, ORCA Digital Solutions
- 2008 - BS Computer Science, University of Portland (Entrepreneur Scholar)

- Leadership: team building (0 → team of 10), remote management since 2019, cross-functional GTM, enterprise and open source partnerships
- AI / ML: AI-SDLC, agentic coding process and principles, RAG content pipelines, prompt engineering, LLM agentic coding, AI agent frameworks, AI-powered content systems
- Developer Relations: AI-optimization, zero-to-one program building, community growth, developer advocacy, technical storytelling, API/SDK evangelism
- Product: API marketplace management, developer experience (DX), cloud platform GTM, KPI definition, roadmap strategy
- Content: video production (123+ tutorials), technical writing, developer marketing at scale, YouTube channel strategy

What Sets Me Apart Zero-to-One DevRel Builder Three times I've built developer relations programs from scratch at companies that had never had one: BMW, GoPro, and 1Password. I know how to make the case internally, hire the team, define the metrics, and ship results before anyone loses patience. Enterprise Partnership DNA My developer programs haven't just attracted hobbyists. I've built ecosystems that brought in NASA, Google, BMW, Jaguar Land Rover, JPL, and Hot Wheels. I understand how to design programs that serve both indie builders and Fortune 500 engineering teams. 2,250x Market Growth At HERE Technologies, I took a cloud API marketplace from a narrow segment to a 2,250x expansion of its serviceable obtainable market. That's not a typo. That's what happens when developer experience strategy meets product management rigor. 
15,000 Developers Onboarded During a Pandemic When COVID shut down every conference and meetup in 2020, I pivoted HERE's developer relations to fully digital, onboarding 15,000 developers and generating 100+ media articles through tutorials, video content, and community engagement. AI-Native Practitioner I'm not just talking about AI. I've built RAG-based tutorial generation pipelines, custom social monitoring tools for Bluesky and Mastodon, and I hold a Prompt Engineering certification from the team behind Facebook's Llama project. I bring hands-on GenAI fluency to everything I do. Content at Scale Across my career, I've produced and directed 123+ developer tutorial videos, written technical documentation, and served as the on-camera advocate for multiple brands. I understand the full pipeline from scripting to shooting to distribution. --- ## AI agents are creating a new distribution channel for dev tools. URL: https://philjohnstonii.com/consulting Summary: Fractional DevRel consulting for API-first companies. Services include Fractional Head of DevRel ($5,000/mo), DevRel Program Audit ($2,500), and AI DevRel Accelerator ($3,500-4,000). Track record with GoPro, HERE Technologies, and 1Password. AI agents are creating a new distribution channel for dev tools. Who is this for? For founders, product leads, and marketing leads at API and SDK companies (5-50 employees) who know the buyer is changing but don't have a playbook yet. Fractional Head of DevRel 8 hrs/week ongoing engagement. Strategy, content, developer onboarding, metrics. DevRel Program Audit One-time deep-dive into existing DevRel efforts. Deliverable: roadmap + recommendations. AI DevRel Accelerator Make your developer tools discoverable by AI coding agents and LLMs. Includes llms.txt setup, AI-optimized documentation, and a discoverability audit across major AI assistants. 
- GoPro: Built program from 0 to 330+ partners (NASA, BMW, Google)
- HERE Technologies: Onboarded 15,000+ developers, grew market 2,250x
- 1Password: Established DevRel vision, managed open source program (900+ projects), technical marketing (videos)

Book a 15-minute call --- Source: https://philjohnstonii.com Last updated: 2026-04-08