Category: AI


  • In this post I’m going to cover how you can run a local LLM (large language model) in the style of ChatGPT on your own computer. We will be using the Vicuna model, which has been quantized. What does this all mean? Well, initially Facebook released a “foundational model” called LLaMA. Using this “base […]
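The full post is truncated here, but a minimal sketch of the idea (loading a quantized Vicuna checkpoint locally) might look like the following, assuming the llama-cpp-python bindings; the model filename is a placeholder, not from the post.

```python
# Sketch only: load a quantized Vicuna model with llama-cpp-python.
# The model path below is an assumed placeholder -- point it at the
# quantized checkpoint you actually downloaded.

def load_vicuna(model_path: str = "./models/vicuna-7b-q4_0.bin"):
    # Deferred import: llama-cpp-python is a heavy optional dependency.
    from llama_cpp import Llama
    return Llama(model_path=model_path)

if __name__ == "__main__":
    llm = load_vicuna()
    # Completion-style prompt; the quantized model runs on CPU.
    out = llm("Q: What is a quantized model? A:", max_tokens=64)
    print(out["choices"][0]["text"])
```

Quantization shrinks the model weights (e.g. to 4-bit integers), which is what makes running it on an ordinary computer feasible in the first place.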

  • Here is a quick guide to running your own local large language model, fine-tuned with the Alpaca instructions, on Google Colab. The model is called “flan-alpaca-large”. First set the runtime to GPU. Then run this code; you will need to uncomment the !pip install transformers call the first time you run […]
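The excerpt cuts off before the code, but a rough sketch of the setup it describes, using the Hugging Face transformers pipeline API, could look like this; the model id "declare-lab/flan-alpaca-large" is an assumption based on the post's name for the model, so check the Hub for the exact checkpoint.

```python
# Sketch only: text generation with flan-alpaca-large via transformers.
# Run `!pip install transformers` in Colab first, as the post notes.

def build_generator(model_id: str = "declare-lab/flan-alpaca-large"):
    # Deferred import so the helper can be defined without the library.
    from transformers import pipeline
    # Flan-T5-based checkpoints are seq2seq models, so they use the
    # text2text-generation task rather than plain text-generation.
    return pipeline("text2text-generation", model=model_id)

if __name__ == "__main__":
    gen = build_generator()
    result = gen("Write a one-sentence summary of what an LLM is.",
                 max_length=64)
    print(result[0]["generated_text"])
```

On a Colab GPU runtime the pipeline will place the model automatically; the first call downloads the weights, so expect a short wait.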