VSCode Extension: [Blackbox AI Code Generation, Code Chat, Code Search]
Category: Artificial Intelligence
Hugging Face
[Hugging Face] is an open-source platform for natural language processing (NLP), best known for its Transformers library, which provides a wide range of pre-trained models and tools for text classification, sentiment analysis, question answering, and more. It is maintained by the Hugging Face team of researchers and engineers together with a large open-source community.
Hugging Face offers a variety of pre-trained models, including transformer-based models like BERT, RoBERTa, and XLNet, as well as other architectures such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs), all of which can be applied to the NLP tasks above.
One of the unique features of Hugging Face is its modular architecture, which allows users to easily integrate new models or customize existing ones to fit their specific needs. This makes it easier for developers and researchers to build and train NLP models without having to start from scratch.
Hugging Face also provides a number of tools and resources for working with NLP models, including a command-line interface (CLI) for easy model training and deployment, as well as a library of pre-trained models that can be easily integrated into a variety of applications.
Overall, Hugging Face is a powerful tool for anyone interested in working with NLP models, from beginners to experts, and it has already been widely adopted in the NLP community.
(Llama2)
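As a concrete illustration of the pre-trained model library described above, a model can be fetched from the Hugging Face Hub with the official CLI. This is only a sketch: the model id bert-base-uncased is an arbitrary example, and the download line is commented out because it assumes huggingface_hub is installed (pip install huggingface_hub) and needs network access.

```shell
# Sketch: fetch a pre-trained model from the Hugging Face Hub.
MODEL=bert-base-uncased            # example model id (assumption, not from the text)
echo "downloading $MODEL"
# Real download (requires huggingface_hub and network access):
# huggingface-cli download "$MODEL" --local-dir "./$MODEL"
```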
pinokio.computer
[Pinokio.computer] is an open-source platform that aims to let developers build, train, and deploy machine learning models more easily, with the goal of providing a more streamlined and efficient way to work with NLP models.
Pinokio.computer provides a number of features that make it easier to work with NLP models, including:
- Pre-trained models: Pinokio.computer offers a variety of pre-trained NLP models, including BERT, RoBERTa, and XLNet, as well as other types of models like RNNs and CNNs. These models can be easily integrated into a wide range of applications.
- Modular architecture: Pinokio.computer’s modular architecture allows users to easily integrate new models or customize existing ones to fit their specific needs. This makes it easier for developers and researchers to build and train NLP models without having to start from scratch.
- Easy deployment: Pinokio.computer provides a simple and easy-to-use interface for deploying NLP models, allowing users to quickly and easily integrate their models into a variety of applications.
- Integration with popular frameworks: Pinokio.computer is designed to work seamlessly with popular NLP frameworks like TensorFlow and PyTorch, making it easier for developers to incorporate NLP models into their existing workflows.
- Community support: Pinokio.computer is an open-source platform, which means that the community can contribute to its development and growth. This allows users to benefit from a growing library of pre-trained models and tools, as well as feedback and support from other developers and researchers in the field.
Overall, Pinokio.computer is designed to make it easier for developers to work with NLP models, by providing a more streamlined and efficient way to build, train, and deploy these models.
(Llama2)
How Ollama Does What it Does
OpenAI’s “Sora” ACTUALLY STUNS Entire Industry – AGI, Emergent Capabilities, and Simulation Theory
Bringing GLaDOS to life with Robotics and AI
Ollama Copilot (OCR) v1.0.4
The Most Insane Week of AI News So Far This Year!
NEW Chinese AI Chips: 3 Big Problems
AI Video From OpenAI Just Blew Everyone’s Minds!
Ollama Windows Preview is here
Meshy AI
[Meshy AI] creates 3D meshes from text prompts.
Raising $7T For Chips, AGI, GPT-5, Open-Source | New Sam Altman Interview
Function Calling in Ollama vs OpenAI
The SIMPLE Way to Build Full Stack AI Apps (Tutorial)
Ollama – Libraries, Vision and Updates
Importing Open Source Models to Ollama
How I’d Learn AI (If I Had to Start Over)
EasyOCR
Finally Ollama has an OpenAI compatible API
Gemini Ultra is Here! (Google’s “ChatGPT Killer”)
wait.. did Google ACTUALLY Pull This Off? Gemini Ultra FULL REVIEW
Google’s GEMINI ULTRA 1.0 Just Dropped (First Look) – Beats GPT4?
Apple’s AI Era Has Begun…
Adding Custom Models to Ollama
Ollama can import models that use one of the following architectures:
- LlamaForCausalLM
- MistralForCausalLM
- RWForCausalLM
- FalconForCausalLM
- GPTNeoXForCausalLM
- GPTBigCodeForCausalLM
[mistral7b_ocr_to_json_v1] (Architecture: MistralForCausalLM)
[Finetune LLM to convert a receipt image to json or xml]
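To tell whether a Hub model can be imported, look at the architectures field of its config.json. A minimal sketch follows; the sample config.json is written inline so the check is self-contained, but for a real model you would read the config.json shipped alongside the weights.

```shell
# Write a sample config.json like the one shipped with mistral7b_ocr_to_json_v1.
cat > config.json <<'EOF'
{ "architectures": ["MistralForCausalLM"], "model_type": "mistral" }
EOF
# Extract the architecture list; it must match one of the supported names above.
grep -o '"architectures": \[[^]]*\]' config.json
```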
On WSL2:
bash <(curl -sSL https://g.bodaay.io/hfd) -h
./hfdownloader -s . -m mychen76/mistral7b_ocr_to_json_v1
Windows Command Prompt:
docker pull ollama/quantize
After the download completes:
docker run --rm -v .:/model ollama/quantize -q q4_0 /model

Create a Modelfile (a plain text file named Modelfile, with no extension):
FROM ./q4_0.bin
TEMPLATE """[INST] {{ .System }} {{ .Prompt }} [/INST]"""
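A two-line Modelfile like the one above is the minimum. For an instruction-tuned Mistral derivative it is usually worth adding stop parameters so generation halts at the instruction markers. A hedged sketch, with parameter values that are typical for this prompt format rather than taken from the model card:

```
FROM ./q4_0.bin
TEMPLATE """[INST] {{ .System }} {{ .Prompt }} [/INST]"""
PARAMETER stop "[INST]"
PARAMETER stop "[/INST]"
```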
Create a folder for the model inside the running container:
docker exec -it ollama mkdir /model
Copy the local files into the container's /model folder:
docker cp . ollama:/model
Create the model:
docker exec -it ollama ollama create mychen76_mistral7b_ocr_to_json_v1 -f /model/Modelfile
Run the model:
docker exec -it ollama ollama run mychen76_mistral7b_ocr_to_json_v1
After any change to the Modelfile, copy it back into the container:
docker cp Modelfile ollama:/model
and repeat the create step as needed.
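Beyond the interactive ollama run session, the created model can also be queried through Ollama's REST API. The sketch below only builds and prints the request body; the curl call is left commented because it assumes a running server with port 11434 published from the container, and the prompt text is illustrative.

```shell
# Build a request body for Ollama's /api/generate endpoint.
MODEL=mychen76_mistral7b_ocr_to_json_v1
PAYLOAD=$(printf '{"model":"%s","prompt":"Convert this receipt text to JSON: ...","stream":false}' "$MODEL")
echo "$PAYLOAD"
# With the server running and port 11434 reachable:
# curl -s http://localhost:11434/api/generate -d "$PAYLOAD"
```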