30 Areas AI Takes Over

Random Observation/Comment #776: AI can easily write lists of 30 for me. Maybe I need to spend my time on things that aren’t easily replaced by data processing and optimized decision making algorithms.

Why this List?

Continuing on the Oct 2022 trend of thinking about tech more broadly, I wanted to map out how AI can be applied to different industries in game-changing ways. I used to write machine learning algorithms for robotics and autonomous vehicle research, but I haven't revisited the field since my days hacking on fintech market data. Time to dig in and poke around with some of the latest libraries.

The evolution of rule-based games into Artificial General Intelligence (AGI) is fascinating to me. The levels of AI, in my mind, are:

  • Interpolation – Train on a data set and generate new versions from the learned patterns (e.g. ingest millions of cat photos and produce an average version of a cat)
  • Extrapolation – Build beyond the training set based on similar examples (e.g. extend an image with outpainting into creative territory, generating and replacing specific textures)
  • Full-on creativity – We haven't seen this just yet (generation without any input from users), but I can imagine simulations built on inpainting and outpainting that simply create without rules (e.g. making something new with new rules, like writing creative software)

With enough data, everything can be interpolation, so I keep thinking about how each data set gets aggregated and grows with usage. A rough sketch of the interpolation vs. extrapolation distinction is below.
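As a toy sketch of that distinction (purely illustrative, assuming numpy is installed): a model fit on data from one range is reasonably trustworthy inside that range, and increasingly speculative outside it.

```python
# Toy illustration of interpolation vs. extrapolation.
# A curve fit on x in [0, 10] does well inside that range (interpolation)
# and typically goes off the rails outside it (extrapolation).
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 10, 200)
y_train = np.sin(x_train) + rng.normal(0, 0.1, x_train.size)  # noisy "training set"

coeffs = np.polyfit(x_train, y_train, deg=7)   # learn the pattern
model = np.poly1d(coeffs)

print("interpolation, x=5.0 :", model(5.0), "vs true", np.sin(5.0))
print("extrapolation, x=14.0:", model(14.0), "vs true", np.sin(14.0))  # usually way off
```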

  1. AI in video games – This is probably the earliest version of bots learning from playing different simulations with human feedback. The algorithm for balancing a game is also interesting here as more economics come in with in-game micro-purchases. The training set is much larger because you can run AI-against-AI simulations inside a game's well-defined rules (a toy learn-from-simulation loop is sketched after the list).
  2. Driverless semis and taxis – The combination of rules and reactions to sensor input is incredibly complex. The training set includes video from every Autopilot-equipped Tesla on the road today. The collection of data just keeps growing, so I imagine this will accelerate if we can keep up with the processing.
  3. Midjourney / DALL-E winning art contests – I still think there's value in drawing, but I wouldn't be surprised if a lot of mass-consumed content gets flooded and saturated with generated images. Training set being all the images processed by OpenAI. I imagine it also learns from text-to-image examples and images uploaded to Google Photos.
  4. GPT-3 writing essays – Do we even need to write anymore? Can we be even more creative with words if we're publishing our own writing back into the training data? Training set being all the written works of a particular writer, genre, or style of writing (e.g. including scientific papers).
  5. Meta’s Make-A-Video AI – The bear looks pretty creepy. Extrapolating to video means they’re also applying it to VR in some way. Training set is in layers, but likely comes from images and video actions that get overlaid and applied to a particular journey.
  6. Google AI extrapolating a 3D fly-through from a single picture – This is so cool. Being able to see a landscape and do a fly-through seems very complex. A naive way would be to find similar photos and stitch them together with blurred motion, but I think this is really live processing generating new landscapes that have never existed. I don’t even know what the training set looks like when you’re extrapolating.
  7. AI creating music – Lost Tapes of the 27 Club was created by learning from Amy Winehouse songs. Training set would be all of Amy Winehouse’s songs, mixed with inspiration from other artists.
  8. AI Chatbots for mental health – Empathizing is beyond language processing. We’re not just trying to repeat and share words of wisdom, but trying to ask questions and build a history of relationships. Training set uses past conversations, saving ideas/contexts/history, while also using natural language processing to generate a meaningful interaction.
  9. AI helping with language translation and voice inflections – Duolingo and Google Translate provide incredible crowdsourcing tools, trained to properly understand context and choose the right words. I’m sure this will lead to the futuristic Mission Impossible trick of listening to a phrase and recreating a whole voice. Training set here is all crowdsourced, with assisted corrections from quizzes, games, and recorded responses.
  10. Fully conversant bot showing a resemblance of sentience and passing the Turing test – This was an interview with the GPT-3 robot, which is pretty scary. Deus Ex Machina. Much more recently, we’re seeing the Optimus robot from Tesla’s AI Day conference grow further. Training set depends on real-time image processing and the decision-making framework.
  11. Consumer interaction in the food industry – Using predictive analytics, you can maximize yield/profit with targeted on-screen menu recommendations during checkout. Training set on orders and completed add-ons. There’s lots of data to collect and test against with the millions of burgers sold per day (a naive co-occurrence recommender is sketched after the list).
  12. Virality generation with AI applied to marketing – This is basically the bot world we live in with “enhanced artificial presence.” Which bots need to be created? How deep a background would be generated for sleeper influencers before they’re activated for propaganda? I’d be curious to see how AI can learn the best avenues for pushing trending news and information. This could effectively impact marketing jobs by offering pay-as-you-go, guaranteed methods of multi-channel distribution. Training on analytics of successful re-shares and rehashes of the same major news.
  13. AI-written movie scripts – I love the idea of learning Tarantino’s movie setups and seeing what new subject could be written after he finishes his 10 films. They will likely also convert these AI-written scripts into video (but is an AI director possible?).
  14. AI selfies – The overlay of human expressions onto any image is already available in apps. This is more augmented reality applied to image manipulation and blinking faces, but it’s pretty impressive nonetheless.
  15. IBM Watson for trivia and knowledge – With all of human knowledge in your pocket, it’s easy to see different APIs and data collection software take further advantage of the massive amount of content created for learning.
  16. Recreating the metaverse from real-world photos – We’ll probably see the combination of all the images collected from phones uploaded to create a new VR space. If we’re all voluntarily contributing to a Google Earth aggregated from smartphone uploads, we can definitely generate a second Earth.
  17. Applied to genomics via AlphaFold 2 – Structural biology with protein folding is so amazing. I can’t believe we’re able to generate 3D structures off of just protein interactions. The craziness is predicting reactions and regenerating the biochemistry.
  18. Creating virtual cells – Building the known interactions so you can run simulations rather than waiting for growth is also super cool (and super complex). I think the virtual cell is just the first step, but possible/manageable. We’re basically recreating the learning of physics within these complex neural networks.
  19. Applied to complex chemical compounds – Simulating helpful or deadly compounds using a training set of known diseases came up in a recent Radiolab episode. I am a little concerned that you can do this.
  20. Disease diagnosis – I’m pretty sure WebMD has all the search data needed to surface trends on what people are worried about or how those symptoms lead to more complex diseases. With enough data, this just seems like a simple lookup table. The hard part is connecting multiple data points from regular scans into a forward-looking indicator of health conditions. This would include genomics, but likely also require more complex mappings to habits and real-time data from other IoT devices. The ultimate output of this would be applied to insurance.
  21. Applied to end-to-end education – Education is currently so ancient. I don’t see why the tests students take don’t feed back into new and different ways of teaching concepts, benchmarked against global levels of learning. The feedback might take months, but learning can be a platform and foster passions. Training set would be platform activities, tests, and overlap with your kid’s growth path.
  22. Generating business websites off of a collection of website designs – The whole internet is filled with HTML and CSS code that has a certain appeal. Even if you just aggregate across a number of WordPress templates, I’m sure you can disrupt web design with 80% generated recommendations and 20% human curation and tweaking.
  23. Building optimized semiconductors and processors – Learning from previous building blocks and launching systems that cover general-purpose computations would make sense. A redesigned microchip layout based on optimizing distances might be an algorithm rather than AI (a toy placement optimizer is sketched after the list), but optimizing toward a general component setup could be pretty creative.
  24. Energy and climate improvements – Optimizing techniques for better batteries, or running new experiments on how macro-level relationships impact the world as an ecosystem, would be very complex. The more discrete work is probably in making batteries better through different component combinations.
  25. Neuralink – Understanding how the brain works and applying different commands to the network seems to be one of the bets Elon is making. I imagine our meat fingers won’t be typing that much longer.
  26. Keyboard machine learning – I think the virtual keyboards on phones are fascinating. Every keyboard learning your word choices and mistakes means you type faster and more accurately. There’s also tons of data for the training set as you keep interacting with the interface (a toy next-word predictor is sketched after the list).
  27. OpenAI making memes – I’d love to see a meme generation engine that can also issue them as NFTs. The short-lived job of meme generator now becomes configuring meme generation and distribution. Training set could be all the unique memes on Reddit.
  28. General-purpose AI for solving many tasks – Gato learns how to beat hundreds of Atari games through simulations and reinforcement learning on the controller/input side while optimizing for high scores. This is a pretty impressive step toward AGI, but it doesn’t necessarily mean it’ll do everything. Maybe a combination of AIs, one per specific area, is the best approach.
  29. Writing software off of text prompts – This does seem quite complex, since software can do anything. Maybe all the design and simple overhead gets removed with pre-built design components, and then you just try to formulate what type of software works? Maybe what’s more practical is describing what the code does and mapping code documentation and executed outcomes back to code (a crude retrieval version is sketched after the list)? The new version of WYSIWYG for web design could be a “What You Describe Is What You Get” for AI.
  30. TikTok’s content recommendations based on watched videos – A superior and far simpler algorithm to optimize than the old way of building feature, categorization, and tagging multi-dimensional spaces and finding shortest-path similarities. This might not be a complex learning algorithm representing people’s virtual interests, but just a simple algorithm based on what’s available to view. I still think the outcome is a learned set of complex interests understood by the algorithm to feed content that’s more addictive. Training set includes all the users swiping, liking, and watching videos to the end, plus further testing loops to hone in on interests and attention variables (e.g. switching back to old interests when current ones look stale). A bandit-style sketch of that loop follows below.
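A few of the ideas above are concrete enough to sketch in toy Python. For item 1, here is a minimal version of a bot learning a game purely from simulation: tabular Q-learning on a made-up one-dimensional "game" where the agent walks from cell 0 to cell 5. The game, rewards, and hyperparameters are all invented for illustration.

```python
# Minimal sketch of a game bot trained entirely in simulation (item 1).
# Tabular Q-learning on a toy 1-D game: start at cell 0, reach cell 5 for a reward.
import random

N_STATES, GOAL = 6, 5
ACTIONS = [-1, +1]                      # move left / right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration

for episode in range(2000):
    s = 0
    while s != GOAL:
        # Mostly exploit the best known move, occasionally explore.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda a_: Q[(s, a_)])
        s_next = min(max(s + a, 0), GOAL)
        r = 1.0 if s_next == GOAL else -0.01          # small penalty per step
        best_next = max(Q[(s_next, a_)] for a_ in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy policy marches straight to the goal.
print([max(ACTIONS, key=lambda a_: Q[(s, a_)]) for s in range(GOAL)])
```

Real game bots swap this tiny table for deep networks and play against copies of themselves, but the learn-from-simulation loop has the same shape.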
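For item 11, the simplest possible checkout recommender just counts which items co-occur in past orders and suggests the strongest co-occurring add-ons. The order data below is made up, and a real system would also weight suggestions by margin/yield.

```python
# Naive checkout upsell recommender built from order co-occurrence (item 11).
from collections import Counter
from itertools import combinations

orders = [
    {"burger", "fries", "soda"},
    {"burger", "fries"},
    {"burger", "soda"},
    {"salad", "water"},
    {"burger", "fries", "shake"},
]

co_counts = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def suggest_addons(cart, k=2):
    """Score items not in the cart by how often they co-occur with cart items."""
    scores = Counter()
    for item in cart:
        for (a, b), n in co_counts.items():
            if a == item and b not in cart:
                scores[b] += n
    return [item for item, _ in scores.most_common(k)]

print(suggest_addons({"burger"}))   # likely ['fries', 'soda']
```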
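For item 23, the "algorithm instead of AI" version of layout optimization can be as plain as random-swap hill climbing that shortens total wire length between connected components. The components, nets, and grid here are all invented.

```python
# Toy chip placement as distance optimization (item 23): random-swap hill climbing.
import random

components = ["cpu", "ram", "gpu", "io", "pmu", "nic"]
nets = [("cpu", "ram"), ("cpu", "gpu"), ("cpu", "io"), ("pmu", "cpu"), ("nic", "io")]
slots = [(x, y) for x in range(3) for y in range(2)]          # 3x2 placement grid

placement = dict(zip(components, random.sample(slots, len(components))))

def wire_length(p):
    # Manhattan distance summed over all connected pairs.
    return sum(abs(p[a][0] - p[b][0]) + abs(p[a][1] - p[b][1]) for a, b in nets)

for _ in range(10000):
    a, b = random.sample(components, 2)
    before = wire_length(placement)
    placement[a], placement[b] = placement[b], placement[a]   # try swapping two parts
    if wire_length(placement) > before:                       # keep only non-worsening swaps
        placement[a], placement[b] = placement[b], placement[a]

print(wire_length(placement), placement)
```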
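For item 26, a phone keyboard's next-word suggestion can be caricatured as bigram counting over whatever you've typed. Real keyboards use far richer models (and also learn your typo corrections); this toy just counts which word followed which.

```python
# Tiny next-word predictor of the kind a phone keyboard learns (item 26).
from collections import Counter, defaultdict

bigrams = defaultdict(Counter)   # previous word -> counts of the word that followed

def learn(text):
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def suggest(prev_word, k=3):
    return [w for w, _ in bigrams[prev_word.lower()].most_common(k)]

learn("see you soon")
learn("see you at lunch")
learn("see the new robot overlords")

print(suggest("see"))   # ['you', 'the'] -- 'you' ranks first after two observations
print(suggest("you"))   # ['soon', 'at']
```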
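For item 29, the crudest possible "What You Describe Is What You Get" is retrieval rather than generation: store snippets keyed by a description and return the one whose description best overlaps the prompt. Real prompt-to-code systems generate code; this toy only looks it up, and the snippet library is invented.

```python
# Crude description-to-code lookup (item 29): match prompt words against stored descriptions.
snippets = {
    "sort a list of numbers": "sorted(values)",
    "read a text file": "open(path).read()",
    "sum a list of numbers": "sum(values)",
}

def describe_to_code(prompt):
    words = set(prompt.lower().split())
    # Pick the stored description sharing the most words with the prompt.
    best = max(snippets, key=lambda desc: len(words & set(desc.split())))
    return snippets[best]

print(describe_to_code("please sum up this list of numbers"))   # -> sum(values)
```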
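And for item 30, an epsilon-greedy bandit captures the testing-loop intuition: serve mostly what the user finishes watching, but keep sampling other categories so interests don't go stale. The categories and completion rates below are invented.

```python
# Bandit-style feed that learns interests from watch-completion (item 30).
import random

categories = ["cooking", "dance", "tech", "travel"]
true_completion = {"cooking": 0.2, "dance": 0.4, "tech": 0.8, "travel": 0.3}  # hidden taste

shown = {c: 0 for c in categories}
completed = {c: 0 for c in categories}
epsilon = 0.1   # keep exploring other interests so the feed doesn't go stale

def pick_category():
    if random.random() < epsilon or all(n == 0 for n in shown.values()):
        return random.choice(categories)
    # Otherwise serve the category with the best observed completion rate.
    return max(categories, key=lambda c: completed[c] / shown[c] if shown[c] else 0.0)

for _ in range(5000):                          # each loop = one video served
    c = pick_category()
    shown[c] += 1
    if random.random() < true_completion[c]:   # did the user watch to the end?
        completed[c] += 1

print({c: round(completed[c] / shown[c], 2) for c in categories if shown[c]})
# The feed converges to serving mostly 'tech' while still sampling the rest.
```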

~See Lemons Hail the new Robot Overlords