Header image: created by Andrew with Midjourney

AI can be local and personal first

While the world is distracted and wringing its hands over OpenAI's ChatGPT, some Stanford grad students quietly released Alpaca, a fine-tune of Meta's LLaMA model. It replicates roughly 95% of GPT-3's capability for about $600 in training costs and runs locally on your own devices. It's already been ported to fairly modern CPUs, GPUs, and embedded devices (think ARMv8 CPUs). I think this is a far more important moment in machine learning software than whatever OpenAI is marketing. You can have your own "bot" running locally on your own hardware, trained on the nuances you want reflected back at you.
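If you want to try this yourself, here's a minimal sketch of running an Alpaca-style model on an ordinary CPU. It assumes the community llama-cpp-python bindings and a quantized weights file you've already obtained (the model path below is hypothetical):

```python
# Minimal local inference sketch. Assumes: pip install llama-cpp-python
# and a quantized Alpaca/LLaMA weights file on disk (path is hypothetical).
from llama_cpp import Llama

llm = Llama(model_path="./models/alpaca-7b-q4.bin")

# Alpaca's instruction-following prompt template.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what an NPU is in one paragraph.\n\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=256, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```

Everything runs on your own machine: no API key, no network round trip, no one else's servers.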

The open source world is slowly experimenting with various machine learning (ML) models. Over time we'll see more and more great projects released and experiments run, and, as with all open source, some will succeed while others fail. The optimistic view is that, much like Linux and BSD, the world can learn, play, and develop new uses for ML.

I've already integrated "GPT-like" programs into my workflow for coding, for writing, and for general information searching. There may be great disruption coming, but I suspect the disruption will favor those who figure out how to augment themselves with "personal bots". Many embedded devices, including mobile computers (aka phones), already include an NPU (Neural Processing Unit), which can do matrix math very quickly; matrix math is the core of machine learning. Where ML helps tremendously with coding and writing is in fleshing out ideas or frameworks. The output is often poorly done, but that's a reflection of the training data. Imagine training an ML model on only the most advanced code or your favorite authors. The results will reflect that greatness, modulo the ML algorithms not screwing it up.
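To make the "matrix math" point concrete, here's a toy NumPy sketch of what an NPU accelerates: a single neural network layer is little more than a matrix multiply, a bias add, and a nonlinearity.

```python
import numpy as np

# A fully connected layer: output = activation(W @ x + b).
# Toy sizes: 4 input features, 3 output units.
rng = np.random.default_rng(0)
x = rng.standard_normal(4)        # input vector
W = rng.standard_normal((3, 4))   # learned weight matrix
b = rng.standard_normal(3)        # learned bias
h = np.maximum(W @ x + b, 0.0)    # matrix multiply + bias + ReLU
print(h)
```

Stack enough of these layers (with vastly bigger matrices) and you have a language model, which is why dedicated matrix-multiply hardware matters so much.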

I have a number of fantasy projects I want to do, in no particular order:

  • Train a model on Nielsen Norman Group's writing, training materials, and studies, then use it to design very usable interfaces with ease.
  • Train a model to read and summarize public company filings the way Warren Buffett and Charlie Munger do. (Others have already started on this; a data-preparation sketch appears after this list.)
  • Train a model to read public company filings and find opportunities for shorting companies for great profit. 
  • Train a model on how F1 drivers drive, and use it to coach amateur race car drivers to improve their lap times.
  • Train a model on the best personal financial management advice from actual economists, with the goal of improving your financial health.
  • Approach the Shannon limit for bandwidth usage in a secure and power-efficient way. Think making WiFi as reliable and performant as fiber, or making ordinary fiber 100x faster. (A back-of-the-envelope capacity calculation appears below.)
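For that last item, the ceiling in question is the Shannon-Hartley limit, C = B · log2(1 + S/N). A quick calculation with hypothetical numbers shows how much headroom a WiFi-sized channel actually has:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: C = B * log2(1 + S/N), in bits/second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Hypothetical numbers: an 80 MHz WiFi channel at 30 dB SNR.
snr = 10 ** (30 / 10)                      # convert dB to linear
print(shannon_capacity(80e6, snr) / 1e6)   # ~797 Mbit/s theoretical ceiling
```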
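Several of the "train a model" projects boil down to the same recipe Alpaca used: collect instruction/input/output examples from your chosen experts and fine-tune on them. Here's a sketch of preparing filing summaries in Alpaca's instruction format; the data, summaries, and file names are all hypothetical:

```python
import json

# Hypothetical training examples: filing excerpts paired with
# investor-style summaries. Real data would come from actual filings
# plus expert-written summaries.
filings = [
    {
        "excerpt": "Revenue grew 12% year over year while gross margin fell 300 bps...",
        "summary": "Top-line growth is steady, but margin compression suggests pricing pressure.",
    },
]

# Alpaca-style instruction records: instruction / input / output.
records = [
    {
        "instruction": "Summarize this filing excerpt the way a value investor would.",
        "input": f["excerpt"],
        "output": f["summary"],
    }
    for f in filings
]

with open("filings_instructions.json", "w") as fh:
    json.dump(records, fh, indent=2)
```

From there, the Stanford Alpaca repository's fine-tuning scripts, or any instruction-tuning pipeline, can take over.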

The list continues as the imagination wanders. What I'm fundamentally doing is encapsulating rare talent in a model and then using it to augment our own capabilities. The other point of all these ideas is that they run locally on your own hardware, possibly disconnected from the Internet.

Releases like Stanford Alpaca and Stable Diffusion bring the power of ML to the masses. It's up to us what we do with this power.