
Genoss GPT

One-line replacement for the OpenAI ChatGPT & Embeddings APIs, powered by OSS models

Genoss

Genoss is a pioneering open-source initiative that aims to offer a seamless alternative to OpenAI models such as GPT-3.5 & GPT-4, using open-source models like GPT4ALL.

Project bootstrapped using Sicarator

Features

Demo

Chat Completion and Embedding with GPT4ALL

https://github.com/OpenGenenerativeAI/GenossGPT/assets/19614572/9cfd4f69-6396-4883-b94d-e94dd76663dc

Supported Models

Starting Up

Before you begin, ensure Python 3.11 or higher is installed on your machine.

Install the server

:warning: We are currently in a pre-publish state.

```bash
pip install genoss
```

Install the latest version from this repository

```bash
pip install git+https://github.com/OpenGenerativeAI/GenossGPT.git@main#egg=genoss
```

Run the server

```bash
genoss-server
# To know more
genoss-server --help
```

Access the API docs at http://localhost:4321/docs.

Models Installation

Install GPT4ALL Model

The first step is to install GPT4ALL, which is the only supported model at the moment. You can do this by following these steps:

1. Clone the repository:
   ```bash
   git clone --recurse-submodules git@github.com:nomic-ai/gpt4all.git
   ```
2. Navigate to the backend directory:
   ```bash
   cd gpt4all/gpt4all-backend/
   ```
3. Create a new build directory and navigate into it:
   ```bash
   mkdir build && cd build
   ```
4. Configure and build the project using cmake:
   ```bash
   cmake ..
   cmake --build . --parallel
   ```
5. Verify that `libllmodel.*` exists in `gpt4all-backend/build`.
6. Navigate back to the root and install the Python package:
   ```bash
   cd ../../gpt4all-bindings/python
   pip3 install -e .
   ```
7. Download it to your local machine from [here](https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin) and put it in the `local_models` directory as `local_models/ggml-gpt4all-j-v1.3-groovy.bin`.
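Once the bindings and the model file are in place, you can sanity-check them before starting Genoss. The snippet below is a minimal sketch: it assumes the gpt4all Python bindings installed in step 6 expose a `GPT4All` class and a `generate()` method, whose exact arguments may differ between binding versions.

```python
# Minimal sanity check of the local model (assumed bindings API; adjust the
# constructor arguments to whatever your installed gpt4all version expects).
from pathlib import Path

from gpt4all import GPT4All  # installed from gpt4all-bindings/python in step 6

model_file = Path("local_models/ggml-gpt4all-j-v1.3-groovy.bin")
assert model_file.exists(), "Download the model into local_models/ first (step 7)"

# Point the bindings at the directory containing the downloaded .bin file.
model = GPT4All(model_name=model_file.name, model_path=str(model_file.parent))
print(model.generate("Hello, who are you?"))
```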

Running the Application

You need to install Poetry and a valid Python version (3.11 or higher).

```bash
poetry install
```

For a complete development install, see CONTRIBUTING.md. If you simply want to start the server, you can install just the corresponding Poetry groups:

```bash
poetry install --only main,llms
```

After the dependencies have been installed, you can run the application with the Uvicorn ASGI server:

```bash
uvicorn main:app --host 0.0.0.0 --port 4321
```

This command launches the Genoss application on port 4321 of your machine.
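To check that the server answers in the OpenAI-compatible format, you can send it a raw HTTP request. The sketch below assumes the chat-completion route mirrors OpenAI's `/chat/completions` path and uses a hypothetical model name; the exact routes and model names are listed in the generated docs at http://localhost:4321/docs.

```python
# Rough sketch: POST an OpenAI-style chat completion request to the local
# Genoss server. Route and model name are assumptions; check /docs for the
# real ones exposed by your build.
import requests

payload = {
    "model": "gpt4all-j-1.3-groovy",  # hypothetical model name
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
}
resp = requests.post("http://localhost:4321/chat/completions", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```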

Running the Webapp Demo

In the `demo/` directory, copy the example environment file:

```bash
cp .env.example .env
```

Replace the values, then run:

```bash
PYTHONPATH=. streamlit run demo/main.py
```

Genoss API Usage

The Genoss API is a one-line replacement for the OpenAI ChatGPT API. It supports the same parameters and returns the same response format as the OpenAI API.

Simply replace the OpenAI API endpoint with the Genoss API endpoint, change the model name to one from the supported list, and you're good to go!
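For example, with the pre-1.0 `openai` Python client the swap is just a different `api_base`; the snippet below is a sketch in which the base URL and model name are assumptions you should adapt to your own deployment.

```python
# Sketch of a drop-in swap using the legacy (<1.0) openai Python client.
# Only api_base (and the model name) change compared to calling OpenAI itself.
import openai

openai.api_key = "sk-anything"             # dummy key for the local GPT4ALL backend
openai.api_base = "http://localhost:4321"  # assumed local Genoss endpoint

chat = openai.ChatCompletion.create(
    model="gpt4all-j-1.3-groovy",  # hypothetical name; use one from the supported models list
    messages=[{"role": "user", "content": "Hello from Genoss!"}],
)
print(chat["choices"][0]["message"]["content"])

# Embeddings use the same pattern:
emb = openai.Embedding.create(
    model="gpt4all-j-1.3-groovy",
    input="Genoss makes OSS models look like OpenAI models.",
)
print(len(emb["data"][0]["embedding"]))
```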

You can find the API documentation at /docs or /redoc.

Screenshot of the API documentation

Upcoming Developments

While GPT4ALL is the only model currently supported, we are planning to add more models in the future. So, stay tuned for more exciting updates.

The vision:

Screenshot of vision diagram

History

Genoss was imagined by Stan Girard when a feature of Quivr became too big and complicated to maintain.

The idea was to create a simple API that would let any model be used through the same interface as OpenAI's ChatGPT API.

Then @mattzcarey, @MaximeThoonsen, @Wirg and @StanGirard started working on the project and it became a reality.

Contributions

Your contributions to Genoss are immensely appreciated! Feel free to submit any issues or pull requests.

Thanks go to these wonderful people:

Sponsors ❤️

This project would not be possible without the support of our sponsors. Thank you for your support!

Theodo Aleios Sicara

License

Genoss is licensed under the Apache 2.0 License. For more details, refer to the LICENSE file.