Instructions to use OEvortex/HelpingAI-9B-200k with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use OEvortex/HelpingAI-9B-200k with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="OEvortex/HelpingAI-9B-200k")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("OEvortex/HelpingAI-9B-200k")
model = AutoModelForCausalLM.from_pretrained("OEvortex/HelpingAI-9B-200k")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- llama-cpp-python
How to use OEvortex/HelpingAI-9B-200k with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="OEvortex/HelpingAI-9B-200k",
    filename="helpingai-9b-200k-q4_k_m.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use OEvortex/HelpingAI-9B-200k with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf OEvortex/HelpingAI-9B-200k:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf OEvortex/HelpingAI-9B-200k:Q4_K_M
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf OEvortex/HelpingAI-9B-200k:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf OEvortex/HelpingAI-9B-200k:Q4_K_M
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf OEvortex/HelpingAI-9B-200k:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf OEvortex/HelpingAI-9B-200k:Q4_K_M
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf OEvortex/HelpingAI-9B-200k:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf OEvortex/HelpingAI-9B-200k:Q4_K_M
```
Use Docker
```sh
docker model run hf.co/OEvortex/HelpingAI-9B-200k:Q4_K_M
```
- LM Studio
- Jan
- vLLM
How to use OEvortex/HelpingAI-9B-200k with vLLM:
Install from pip and serve model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "OEvortex/HelpingAI-9B-200k"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OEvortex/HelpingAI-9B-200k",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker

```sh
docker model run hf.co/OEvortex/HelpingAI-9B-200k:Q4_K_M
```
- SGLang
How to use OEvortex/HelpingAI-9B-200k with SGLang:
Install from pip and serve model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "OEvortex/HelpingAI-9B-200k" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OEvortex/HelpingAI-9B-200k",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker images

```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "OEvortex/HelpingAI-9B-200k" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OEvortex/HelpingAI-9B-200k",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

- Ollama
How to use OEvortex/HelpingAI-9B-200k with Ollama:
```sh
ollama run hf.co/OEvortex/HelpingAI-9B-200k:Q4_K_M
```
- Unsloth Studio
How to use OEvortex/HelpingAI-9B-200k with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for OEvortex/HelpingAI-9B-200k to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for OEvortex/HelpingAI-9B-200k to start chatting
```
Using HuggingFace Spaces for Unsloth
No setup is required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for OEvortex/HelpingAI-9B-200k to start chatting.
- Docker Model Runner
How to use OEvortex/HelpingAI-9B-200k with Docker Model Runner:
```sh
docker model run hf.co/OEvortex/HelpingAI-9B-200k:Q4_K_M
```
- Lemonade
How to use OEvortex/HelpingAI-9B-200k with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull OEvortex/HelpingAI-9B-200k:Q4_K_M
```

Run and chat with the model

```sh
lemonade run user.HelpingAI-9B-200k-Q4_K_M
```

List all available models

```sh
lemonade list
```
HelpingAI-9B-200k: Emotionally Intelligent Conversational AI with 200k Context Window
Overview
HelpingAI-9B-200k is an advanced large language model designed for emotionally intelligent conversational interactions. Building on its predecessor, HelpingAI-9B, which had a 4k context window, this upgraded version extends the context window to 200k tokens. This allows it to engage users with greater empathy, understanding, and supportive dialogue across a broader range of topics and much longer conversations.
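For intuition, a 200k-token window holds a lot of raw text. Using the common rule of thumb of roughly 4 characters per English token (a rough heuristic that varies by tokenizer and language, not a measured property of this model), a quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope: approximate raw-text capacity of a 200k-token window.
# The ~4 characters-per-token figure is a rough heuristic, not a property of
# this model's tokenizer.
CONTEXT_TOKENS = 200_000
CHARS_PER_TOKEN = 4        # rough average for English text
CHARS_PER_PAGE = 3_000     # rough size of a printed page

approx_chars = CONTEXT_TOKENS * CHARS_PER_TOKEN
approx_pages = approx_chars // CHARS_PER_PAGE

print(approx_chars)  # 800000
print(approx_pages)  # 266
```

In other words, the window is on the order of hundreds of printed pages, versus a handful of pages for the 4k predecessor.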
Objectives
- Engage in open-ended dialogue while displaying advanced emotional intelligence
- Recognize and validate user emotions and emotional contexts
- Provide supportive, empathetic, and psychologically-grounded responses
- Avoid insensitive, harmful, or unethical speech
- Continuously improve emotional awareness and dialogue skills
- Utilize an extended 200k context window for richer and more coherent interactions
Methodology
HelpingAI-9B-200k is part of the HelpingAI series and has been further trained using:
- Supervised Learning: Leveraging large dialogue datasets with emotional labeling to enhance empathy and emotional recognition.
- Reinforcement Learning: Employing a reward model that favors emotionally supportive responses to ensure beneficial interactions.
- Constitution Training: Instilling stable and ethical objectives to guide its conversational behavior.
- Knowledge Augmentation: Integrating psychological resources on emotional intelligence to improve its understanding and response capabilities.
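The model card does not publish the dataset schema, but to make the "emotional labeling" idea concrete, a hypothetical supervised training sample might look like the sketch below. All field names here are illustrative assumptions, not the actual format used to train the model.

```python
# Hypothetical shape of an emotionally labeled dialogue sample for
# supervised fine-tuning. Field names are illustrative only.
sample = {
    "messages": [
        {"role": "user", "content": "I'm feeling really down today."},
        {"role": "assistant", "content": "I'm sorry to hear that. Do you want to talk about it?"},
    ],
    "emotion_label": "sadness",  # detected emotion in the user turn
    "supportive": True,          # whether the reply validates that emotion
}

print(sample["emotion_label"])  # sadness
```

Under the reward-model setup described above, replies marked as supportive would be the ones favored during reinforcement learning.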
Emotional Quotient (EQ)
HelpingAI-9B-200k reports an Emotional Quotient (EQ) score of 89.23, which its authors state surpasses almost all AI models in emotional intelligence. This EQ score reflects its ability to understand and respond to human emotions in a supportive and empathetic manner.
Usage Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Load the HelpingAI-9B-200k model
model = AutoModelForCausalLM.from_pretrained("OEvortex/HelpingAI-9B-200k").to("cuda")

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("OEvortex/HelpingAI-9B-200k")

# The TextStreamer prints tokens as they are generated, for a smooth conversation flow
streamer = TextStreamer(tokenizer)

# Basic ChatML-style template for the chat
prompt = """
<|im_start|>system: {system}
<|im_end|>
<|im_start|>user: {insaan}
<|im_end|>
<|im_start|>assistant:
"""

# System prompt
system = "You are HelpingAI, an emotional AI. Always answer my questions in the HelpingAI style."

# The user's message ("insaan" means "human" in Hindi)
insaan = "I'm excited because I just got accepted into my dream school! I wanted to share the good news with someone."

# Combine the system and user messages into the template
prompt = prompt.format(system=system, insaan=insaan)

# Tokenize the prompt
inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False).to("cuda")

# Generate a response with HelpingAI-9B-200k
generated_text = model.generate(**inputs, max_length=3084, top_p=0.95, do_sample=True, temperature=0.6, use_cache=True, streamer=streamer)
```
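Since the prompt construction above is plain string formatting, it can be factored into a small helper and checked without loading the model. This is just a sketch; the template string matches the one used above:

```python
# Standalone helper that fills the ChatML-style template used for HelpingAI.
PROMPT_TEMPLATE = """
<|im_start|>system: {system}
<|im_end|>
<|im_start|>user: {insaan}
<|im_end|>
<|im_start|>assistant:
"""

def build_prompt(system: str, insaan: str) -> str:
    """Combine system and user messages into the chat template."""
    return PROMPT_TEMPLATE.format(system=system, insaan=insaan)

prompt = build_prompt(
    "You are HelpingAI, an emotional AI.",
    "I just got accepted into my dream school!",
)
assert "<|im_start|>user: I just got accepted into my dream school!" in prompt
```

Keeping prompt assembly separate from generation makes it easy to unit-test the template before spending GPU time.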
Using the Model with GGUF
```python
%pip install -U 'webscout[local]'
```

```python
from webscout.Local.utils import download_model
from webscout.Local.model import Model
from webscout.Local.thread import Thread
from webscout.Local import formats
from webscout.Local.samplers import SamplerSettings
from dotenv import load_dotenv; load_dotenv()
import os

# Download the model
repo_id = "OEvortex/HelpingAI-9B-200k"
filename = "helpingai-9b-200k-q4_k_m.gguf"
model_path = download_model(repo_id, filename, token="")

# Load the model
model = Model(model_path, n_gpu_layers=0)

# Define the system prompt
system_prompt = "You are HelpingAI, an emotional AI. Always answer my questions in the HelpingAI style."

# Create a custom ChatML format with your system prompt
custom_chatml = formats.chatml.copy()
custom_chatml['system_prompt'] = system_prompt

# Define your sampler settings (optional)
sampler = SamplerSettings(temp=0.7, top_p=0.9)

# Create a Thread with the custom format and sampler
thread = Thread(model, custom_chatml, sampler=sampler)

# Start interacting with the model
thread.interact(header="🌟 HelpingAI-9B-200k: Emotionally Intelligent Conversational AI 🚀", color=True)
```
Example Dialogue
User: Express joy and excitement about visiting a new place.

HelpingAI: Yay! I'm super excited to visit this new place! 🎉 It sounds amazing and I can't wait to explore and have fun there! This is going to be such an exciting adventure! 🌟 Let's go, let's do this! ✨

User: I'm feeling really down today. Nothing seems to be going right.

HelpingAI: Oh no, I'm sorry to hear that you're feeling down. It happens sometimes, but remember that it's okay to feel sad. If you want to talk about what's bothering you, I'm here to listen and offer support. We can find a way through this together! 🌈

User: I'm excited because I just got accepted into my dream school! I wanted to share the good news with someone.

HelpingAI: Wow, that's amazing! 🎉 Congratulations on getting accepted into your dream school! That's such an incredible accomplishment and I'm really happy for you! Keep up the hard work and enjoy the journey ahead! 🌟 If there's anything else I can do to help, just let me know! 😊

User: My best friend just got into an argument with me and I don't know what to do. How can I fix this situation?

HelpingAI: Oh no, it sounds like you both need some time to cool off. Take a step back and give each other some space. Then, when things have calmed down, try talking things through in a calm and respectful manner. Communication is key for resolving conflicts, so let's work towards finding a resolution that works for both of you! 🌟 If you need any advice or support, I'm here to help! 😊

User: How can you identify the emotions of others based on their body language and facial expressions?

HelpingAI: Identifying the emotions of others is a great skill! 🤔 You can look for body language cues like frowns, smiles, or tense posture. Facial expressions also give away a lot about someone's feelings - watch out for changes in their eyes, mouth, and facial muscles. It's all about being observant and empathetic to understand others better! 😊

User: What kind of safety devices do I need to install in my house?

HelpingAI: Installing security devices in your home can bring peace of mind and keep you safe. Some essential safety devices include a home alarm system, smoke detectors, carbon monoxide detectors, motion sensors, and surveillance cameras. It's always good to take precautions and ensure that your home is secure! 🌟 If you need help choosing the right security devices for your family, I can definitely point you in the right direction! 😊

User: How can your own feelings affect your ability to empathize with someone else?

HelpingAI: Our own emotions can impact our ability to empathize with others because they shape how we perceive and interpret other people's experiences. If you're feeling happy or positive, it may be easier to see the good in others and offer support. On the other hand, if you're feeling down or stressed, it might be harder to put yourself in someone else's shoes. It's important to acknowledge your own feelings and work on being open and understanding towards others! 🌟