
Exploring Open Source Models with Text Generation WebUI

Are you looking to prototype with open source LLMs? Text Generation WebUI allows you to customize web interfaces and experiment with diverse LLM model backends whilst working in a clean, simple user interface.

Learn

Text Generation WebUI

text-generation-webui is a versatile tool for working with Large Language Models (LLMs) that offers seamless web interface integration.

Experience

Mike Taylor

Built a 50-person growth agency.
Free access for email subscribers.
Python experience recommended.
1. Scenario
UPSERT HEADQUARTERS – LATE EVENING
It's late in the evening and you're at Upsert Headquarters with William Maple, Commercial Director of Fizzy Love Drinks Co. He's asked you to explore open source models with Text Generation WebUI. With the help of this tutorial, you'll use a GPU, download the repository, move models into the models folder, and run a command to launch the WebUI. You can use it to experiment with AI, change parameters, upload models, create a chat, and change a character's greeting.
Maple
at Fizzy Love Drinks Co.

We need to be able to create content that is more dynamic and engaging for our customers.

Using this open source platform with Text Generation WebUI will enable us to do just that.

We can quickly generate content that is tailored to our target audience.

Let's get to work!

This course is a work of fiction. Unless otherwise indicated, all the names, characters, businesses, data, places, events and incidents in this course are either the product of the author's imagination or used in a fictitious manner. Any resemblance to actual persons, living or dead, or actual events is purely coincidental.

2. Brief

If you are keen to explore open source models like Llama 2 and Mistral, Text Generation WebUI is a remarkable tool to consider. The interface operates much like AUTOMATIC1111's well-known Stable Diffusion web UI, but for text generation. One of its prime benefits is that it runs offline and doesn't incur any of the per-token costs associated with services like OpenAI.

To get the most out of this tool, having a GPU is crucial; an M2 Mac or a gaming PC equipped with an Nvidia GPU would suffice. Start by downloading the GitHub repository. Next, acquire a model from a reputable source, such as Hugging Face, and move it into the models folder within the repository.

Proceed by running the start script for your platform, such as 'start_macos.sh' or 'start_windows.bat', via your terminal or command prompt. Some errors may appear in the console, but these typically won't prevent the UI from running locally on your system. Once the UI is operational, create a new chat and select the desired model from the available list. After loading your chosen model, you're all set to initiate a conversation.

The UI offers customizable parameters. For example, you can configure it to utilize your GPU, fix the random seed to get reproducible results, or refine the model's settings to fit your specific needs, such as adjusting token probabilities, an option not typically available in conventional chatbots.
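To see why these parameters matter, here is a minimal sketch of seeded, biased token sampling in plain Python. This is a toy stand-in, not WebUI code: the function name `sample_next_token`, the `logit_bias` mapping, and the tiny vocabulary are all illustrative assumptions, but the mechanics (softmax over adjusted logits, a draw from a seeded random source) mirror what the seed and token-probability controls actually change.

```python
import math
import random

def sample_next_token(logits, rng, logit_bias=None, temperature=1.0):
    """Sample a token from raw logits, optionally applying per-token biases.

    `logit_bias` maps token -> additive adjustment, mimicking the kind of
    token-probability control the WebUI exposes in its parameter tabs.
    """
    adjusted = dict(logits)
    for tok, bias in (logit_bias or {}).items():
        adjusted[tok] = adjusted.get(tok, 0.0) + bias
    # Softmax with temperature (subtract the max for numerical stability).
    scaled = {t: l / temperature for t, l in adjusted.items()}
    m = max(scaled.values())
    exps = {t: math.exp(l - m) for t, l in scaled.items()}
    total = sum(exps.values())
    probs = {t: e / total for t, e in exps.items()}
    # Draw one token from the categorical distribution.
    r = rng.random()
    cum = 0.0
    for tok, p in probs.items():
        cum += p
        if r <= cum:
            return tok
    return tok

logits = {"fizzy": 2.0, "flat": 1.0, "sparkling": 0.5}
# Fixing the seed makes the draw reproducible across runs.
a = sample_next_token(logits, random.Random(42))
b = sample_next_token(logits, random.Random(42))
print(a == b)  # True
```

Strongly penalizing the other tokens (e.g. `logit_bias={"fizzy": -100.0, "flat": -100.0}`) makes "sparkling" all but certain, which is the essence of steering output through token probabilities.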

Moreover, with some backends, such as llama.cpp running Llama models, the UI supports grammar-constrained sampling, enabling the generation of guaranteed-valid JSON output. You can also fine-tune generation by setting limits on token counts and defining custom stopping conditions.
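The two cut-offs mentioned above, a token budget and custom stop strings, can be illustrated with a toy generation loop. Everything here is a hypothetical sketch: `generate` and the hand-written token list stand in for a model emitting tokens one at a time, and none of these names come from the WebUI's actual code.

```python
def generate(token_stream, max_new_tokens=8, stop_strings=("\nUser:",)):
    """Accumulate tokens until a token budget or a custom stop string is hit."""
    out = ""
    for i, token in enumerate(token_stream):
        out += token
        if any(out.endswith(s) for s in stop_strings):
            # Trim the stop string itself so it doesn't appear in the reply.
            for s in stop_strings:
                if out.endswith(s):
                    out = out[: -len(s)]
            return out
        if i + 1 >= max_new_tokens:
            return out
    return out

# A fake stream: the "model" starts a new speaker turn, which triggers the stop.
tokens = ["Fizzy", " Love", " is", " great", ".", "\nUser:", " more"]
print(generate(tokens))  # Fizzy Love is great.
```

In a chat UI, a stop string like `"\nUser:"` is what keeps the model from continuing the conversation on the user's behalf, while `max_new_tokens` caps the length of any single reply.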

3. Tutorial

Okay guys, I'm going to talk you through the Text Generation WebUI. If you haven't seen this before, this is a tool you can run on your local computer to use open source models like Llama 2 or Mistral.

4. Exercises
5. Certificate
