I was able to build one app for both iOS and Android in just two workdays with Claude Code about two months ago!
And that was before Opus 4.1 and Codex CLI with GPT-5 High; it would take even less time now.
> I created a custom GPT, then trained it on all the Swift, Swift UI, and Apple developer documentation
I assume OP means he uploaded the documentation PDFs, but it's unclear to me; it would be interesting to hear what this actually means.
From the article:
> For context, I have zero knowledge of Swift, no coding experience whatsoever, and don’t even know how to use Xcode
Given their lack of expertise, I can almost guarantee they're just referring to uploading a knowledge base (a collection of files) as part of creating a custom GPT [1].
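If that's what happened, the GPT builder UI flow has a rough programmatic equivalent in the OpenAI Assistants API, where a vector store plays the role of the GPT's knowledge base. A minimal sketch, assuming the openai Python SDK and a hypothetical swift-book.pdf (not OP's actual setup; custom GPTs themselves are created in the ChatGPT UI):

```python
# Rough equivalent of "uploading a knowledge base" to a custom GPT,
# done via the Assistants API instead of the GPT builder UI.
# Assumes `pip install openai` and OPENAI_API_KEY in the environment;
# depending on SDK version, vector stores may live outside client.beta.
from openai import OpenAI

client = OpenAI()

# Upload a documentation file and index it into a vector store.
doc = client.files.create(file=open("swift-book.pdf", "rb"), purpose="assistants")
store = client.beta.vector_stores.create(name="swift-docs", file_ids=[doc.id])

# An assistant with file_search over that store is roughly what a
# custom GPT with an attached knowledge base does behind the scenes:
# retrieval over the files, not any actual model training.
assistant = client.beta.assistants.create(
    model="gpt-4o",
    instructions="Answer questions using the attached Swift documentation.",
    tools=[{"type": "file_search"}],
    tool_resources={"file_search": {"vector_store_ids": [store.id]}},
)
```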
I suppose they could also have tried GPT Actions, which let you connect external data sources (so, something like the equivalent of the Context7 MCP), but that's doubtful given their experience.
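For what it's worth, an Action is just an HTTP endpoint described by an OpenAPI schema that the GPT is allowed to call. A minimal sketch of such an endpoint, assuming FastAPI (which generates the required schema automatically); the lookup route and its toy corpus are hypothetical:

```python
# Minimal sketch of an API a GPT Action could call. Actions consume an
# OpenAPI schema, which FastAPI serves for free at /openapi.json.
# Assumes `pip install fastapi uvicorn`.
from fastapi import FastAPI

app = FastAPI(title="Docs Lookup", version="0.1.0")

# Stand-in corpus; a real Action would query live documentation.
DOCS = {
    "NavigationStack": "A view that displays a root view and enables navigation.",
    "AVAudioPlayer": "Plays audio data from a file or buffer.",
}

@app.get("/lookup")
def lookup(symbol: str) -> dict:
    """Return the doc snippet for a Swift/SwiftUI symbol, if known."""
    return {"symbol": symbol, "doc": DOCS.get(symbol, "not found")}

# Run with: uvicorn main:app --port 8000
# Then point the GPT's Actions config at http://<host>/openapi.json.
```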
- [1] https://help.openai.com/en/articles/8554397-creating-a-gpt
My guess is the LLM was then an expert on the user guide… so as long as you asked it questions about standard, by-the-book use cases, it was perfectly well trained.
But building something from those use cases? If one of them had been an iPhone podcast app, the dev would already be in possession of an iPhone podcast app.
Ideally they would have trained the LLM on a few dozen iPhone podcast apps :/