Latest Open Source Projects
open-r1
TLDR: This repo is a fully open reproduction of DeepSeek-R1, aiming to build the missing parts of the R1 pipeline. It includes scripts for training and evaluating models and for generating synthetic data, plus a Makefile for running common commands. The plan of attack involves replicating the R1-Distill models, building a pure RL pipeline, and demonstrating multi-stage training. Training supports DDP or DeepSpeed ZeRO-2 and ZeRO-3, and several evaluation methods are provided depending on model size and hardware requirements.
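The pure RL stage of an R1-style pipeline depends on verifiable, rule-based rewards. As a rough illustration only (not the repo's actual code; the function names and tag conventions are assumptions), a reward might check a completion's final answer and its reasoning format like this:

```python
import re

def accuracy_reward(completion: str, ground_truth: str) -> float:
    """Illustrative rule-based reward: 1.0 if the final boxed answer matches."""
    match = re.search(r"\\boxed\{([^}]*)\}", completion)
    if match is None:
        return 0.0
    return 1.0 if match.group(1).strip() == ground_truth.strip() else 0.0

def format_reward(completion: str) -> float:
    """Reward completions that wrap their reasoning in <think>...</think> tags."""
    return 1.0 if re.search(r"<think>.*?</think>", completion, re.DOTALL) else 0.0

# Example: score one sampled completion against its reference answer.
sample = "<think>2 + 2 = 4</think> The answer is \\boxed{4}."
print(accuracy_reward(sample, "4"), format_reward(sample))
```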
RAT-retrieval-augmented-thinking
TLDR: RAT is a tool that enhances AI responses by combining DeepSeek's reasoning capabilities with various response models served through OpenRouter. It offers features like model selection, reasoning visibility, and context awareness.
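The core idea is a two-stage call: first obtain reasoning from DeepSeek, then hand it to a separate response model. A minimal sketch using OpenAI-compatible clients is shown below; the `reasoning_content` field, model names, and endpoints are assumptions based on DeepSeek's and OpenRouter's documented APIs, not RAT's actual code.

```python
from openai import OpenAI

# Hypothetical keys/endpoints; the real tool reads these from its own configuration.
deepseek = OpenAI(api_key="DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")
openrouter = OpenAI(api_key="OPENROUTER_API_KEY", base_url="https://openrouter.ai/api/v1")

question = "Why is the sky blue?"

# Stage 1: ask the DeepSeek reasoner for its chain of thought.
reasoning = deepseek.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": question}],
).choices[0].message

# Stage 2: pass the reasoning to a separate response model via OpenRouter.
answer = openrouter.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",  # illustrative choice of response model
    messages=[
        {"role": "system", "content": f"Reasoning to consider:\n{reasoning.reasoning_content}"},
        {"role": "user", "content": question},
    ],
).choices[0].message.content

print(answer)
```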
WebRover
TLDR: WebRover is an autonomous AI agent that uses advanced language models and web automation tools to navigate the web, gather information, and return structured responses. Features include AI-powered navigation, smart element detection, visual feedback, and autonomous operation. The backend is built with Python, LangChain, Playwright, OpenAI GPT-4, and FastAPI; the frontend uses Next.js, TypeScript, Tailwind CSS, and Framer Motion. Setup requires configuring environment variables, and instructions are provided for both the backend and the frontend. Contributions are welcome, and the project is licensed under the MIT License.
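To give a feel for the Playwright side of such an agent, here is a minimal sketch of fetching page content that an LLM could then reason over; the function and flow are illustrative, not WebRover's actual implementation.

```python
import asyncio
from playwright.async_api import async_playwright

async def fetch_page_text(url: str) -> str:
    """Open a page with Playwright and return its visible text (illustrative only)."""
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=True)
        page = await browser.new_page()
        await page.goto(url)
        # A real agent would pass structured page state to the LLM to pick the next action.
        text = await page.inner_text("body")
        await browser.close()
        return text

if __name__ == "__main__":
    print(asyncio.run(fetch_page_text("https://example.com"))[:500])
```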
evabyte
TLDR: This repository focuses on EvaByte, a 6.5B byte-level language model. It has an improved architecture with multibyte prediction and an efficient attention mechanism (EVA). Trained on 1.5T bytes of data, it performs well on coding tasks and decodes quickly. The repo provides a model implementation based on the Hugging Face `transformers` library and inference examples covering different generation modes, along with relevant notes and limitations. Evaluation methods and citation information are also provided.
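Since the implementation targets `transformers`, basic inference should look roughly like the sketch below; the model id and `trust_remote_code` usage are assumptions, so check the repo's own examples for the exact setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model id is an assumption; check the Hugging Face Hub for the actual repository name.
model_id = "EvaByte/EvaByte-SFT"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True
).to("cuda")

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```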
web-ui
TLDR: This project builds on browser-use and offers a user-friendly WebUI with support for various LLMs, custom browser usage, and persistent browser sessions. Installation options include local and Docker setups, and different themes and settings are provided. The changelog lists updates such as DeepSeek-R1 support, Docker setup, and keeping the browser open between tasks.
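Under the hood, the WebUI drives the browser-use library. A minimal programmatic sketch of that underlying agent, assuming browser-use's documented `Agent` interface, might look like this (the task string and LLM choice are placeholders):

```python
import asyncio
from browser_use import Agent
from langchain_openai import ChatOpenAI  # any LLM wrapper the library supports

async def main():
    # The WebUI wires this up through its settings pages; shown here programmatically.
    agent = Agent(
        task="Find the latest release notes for browser-use on GitHub",
        llm=ChatOpenAI(model="gpt-4o"),
    )
    await agent.run()

asyncio.run(main())
```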
youtube
TLDR: This repository contains scripts for various YouTube video processing tasks, such as audio-to-text conversion, audio-to-subtitle conversion, video resolution conversion for YouTube Shorts, subtitle text processing, and splitting videos into short clips.
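For a sense of the audio-to-text and subtitle steps, here is a minimal sketch using the openai-whisper package; the repo's own scripts may rely on a different transcription backend, and the file name is a placeholder.

```python
import whisper

# Assumes the openai-whisper package; the repo's scripts may use a different backend.
model = whisper.load_model("base")
result = model.transcribe("talk.mp3")

# Dump plain text plus a simple SRT-style listing built from the timed segments.
print(result["text"])
for i, seg in enumerate(result["segments"], start=1):
    print(f"{i}: {seg['start']:.2f} --> {seg['end']:.2f}  {seg['text'].strip()}")
```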
AI-reads-books-page-by-page
TLDR: This repository contains a script that performs page-by-page analysis of PDF books, extracting knowledge points and generating summaries. It offers features like automated analysis, AI-powered content understanding, interval summaries, and customizable options. The script can be set up by cloning the repository, installing requirements, and configuring constants. It works by setting up directories, loading an existing knowledge base, processing pages, generating summaries, and saving the results.
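The page-by-page loop with interval summaries can be sketched as follows; the constants, the `analyze_page` stub standing in for the LLM call, and the use of pypdf are all assumptions for illustration, not the script's actual code.

```python
from pypdf import PdfReader

PDF_PATH = "book.pdf"          # illustrative constants; the script exposes similar knobs
SUMMARY_INTERVAL = 10          # pages between interval summaries

def analyze_page(text: str) -> list[str]:
    """Placeholder for the LLM call that extracts knowledge points from one page."""
    return [line for line in text.splitlines() if line.strip()][:3]

reader = PdfReader(PDF_PATH)
knowledge_base: list[str] = []

for page_num, page in enumerate(reader.pages, start=1):
    knowledge_base.extend(analyze_page(page.extract_text() or ""))
    if page_num % SUMMARY_INTERVAL == 0:
        # A real run would ask the model to summarize the accumulated points here.
        print(f"Interval summary after page {page_num}: {len(knowledge_base)} points so far")
```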
deepseek-engineer
TLDR: A coding assistant application that integrates with the DeepSeek API. It can process user conversations, generate structured JSON responses, read local files, create new files, and apply diff edits. Features include DeepSeek client configuration, data models, helper functions, and an interactive session.
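A minimal sketch of the client-plus-structured-response pattern is shown below, assuming DeepSeek's OpenAI-compatible endpoint and JSON mode; the `FileEdit` schema and prompts are illustrative, not the app's actual data models.

```python
from openai import OpenAI
from pydantic import BaseModel

# Illustrative schema for a structured edit; not the app's actual data model.
class FileEdit(BaseModel):
    path: str
    original_snippet: str
    new_snippet: str

client = OpenAI(api_key="DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")

response = client.chat.completions.create(
    model="deepseek-chat",
    response_format={"type": "json_object"},  # request a JSON reply we can parse
    messages=[
        {"role": "system", "content": "Reply with JSON: {path, original_snippet, new_snippet}."},
        {"role": "user", "content": "Rename the variable `x` to `total` in utils.py."},
    ],
)

edit = FileEdit.model_validate_json(response.choices[0].message.content)
print(edit.path, edit.new_snippet)
```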
Aria-UI
TLDR: Aria-UI is a model that handles diverse grounding instructions for GUIs, is context-aware, lightweight, and fast, and achieves superior performance on agent benchmarks. It can be installed and used with vLLM or Transformers.
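A rough vLLM-based sketch of grounding a single instruction against a screenshot follows; the model id, prompt template, and image handling are all assumptions, so defer to the repo's own examples for the real usage.

```python
from PIL import Image
from vllm import LLM, SamplingParams

# Model id and prompt format are assumptions; check the repo's examples for the real ones.
llm = LLM(model="Aria-UI/Aria-UI-base", trust_remote_code=True, max_model_len=8192)

screenshot = Image.open("screenshot.png")
instruction = "Click the blue 'Submit' button at the bottom of the form."

outputs = llm.generate(
    {
        "prompt": f"<|im_start|>user\n<image>\n{instruction}<|im_end|>\n<|im_start|>assistant\n",
        "multi_modal_data": {"image": screenshot},
    },
    SamplingParams(max_tokens=64, temperature=0.0),
)
print(outputs[0].outputs[0].text)  # expected to contain the grounded target coordinates
```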
pasa
TLDR: This repo introduces PaSa, an LLM-powered paper search agent that makes autonomous decisions for complex scholarly queries. Optimized with reinforcement learning and synthetic data, PaSa outperforms baselines. It consists of two agents, a Crawler and a Selector, and uses two datasets. Instructions for quick start, running locally, and training are provided.
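The Crawler/Selector split can be pictured with the schematic below; every name and stub here is hypothetical and stands in for the LLM-driven search and relevance-judging calls in the real system.

```python
from dataclasses import dataclass, field

@dataclass
class Paper:
    title: str
    abstract: str
    citations: list[str] = field(default_factory=list)

def crawler(query: str, frontier: list[Paper]) -> list[Paper]:
    """Decide which searches to run and which citations to expand (LLM-driven in PaSa)."""
    return frontier  # placeholder: the real agent issues search/expand actions here

def selector(query: str, candidates: list[Paper]) -> list[Paper]:
    """Judge each candidate paper's relevance to the query (LLM-driven in PaSa)."""
    return candidates  # placeholder: the real agent filters by relevance here

def search(query: str, max_rounds: int = 3) -> list[Paper]:
    """Alternate crawling and selecting for a fixed number of rounds."""
    frontier: list[Paper] = []
    selected: list[Paper] = []
    for _ in range(max_rounds):
        frontier = crawler(query, frontier)
        selected.extend(selector(query, frontier))
    return selected
```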