Sunday, April 5, 2026

The Git Times

“The best way to predict the future is to invent it.” — Alan Kay

AI Models
Claude Sonnet 4.6 $15/M · GPT-5.4 $15/M · Gemini 3.1 Pro $12/M · Grok 4.20 $6/M · DeepSeek V3.2 $0.89/M · Llama 4 Maverick $0.60/M
Full Markets →

Apfel Brings Apple's On-Device LLM to the Command Line 🔗

Swift utility wraps FoundationModels framework to deliver zero-cost local inference through CLI and OpenAI-compatible server

Arthur-Ficial/apfel · Swift · 1.7k stars · 1w old · Latest: v0.8.1

apfel solves a specific constraint facing Apple Silicon developers: Apple's powerful on-device language model ships with every compatible Mac but remains locked inside Siri and system features. The FoundationModels framework, introduced with macOS 26 Tahoe, provides the technical path to that model, yet Apple offers no general-purpose interface for it.

The project delivers both a UNIX-style command-line tool and an HTTP server, all written in Swift and running entirely on-device. No API keys, no network requests, and no external model downloads are required. Inference uses the built-in Apple Intelligence model with a 4096-token context window. Installation targets users who already meet the baseline: an Apple Silicon Mac, macOS 26 or newer, and Apple Intelligence enabled.

On the command line, apfel behaves like a standard Unix utility: it accepts piped input, supports the -f flag for attaching one or more files, streams output when requested, and returns structured JSON or conventional exit codes. Typical patterns include piping documentation in for summarization or feeding source files with context: echo "Summarize: $(cat README.md)" | apfel, or apfel -f README.md "Analyze this project".

The --serve mode launches an OpenAI-compatible endpoint at localhost:11434. Any existing OpenAI SDK or client can redirect to it with only an endpoint change, allowing developers to swap between cloud and local models without altering application logic. Tool calling receives first-class support, including schema conversion and full round-trip execution.
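Because the server speaks the OpenAI wire format, redirecting a client is a one-line change. A minimal sketch using only the Python standard library; the /v1/chat/completions path and request shape follow the OpenAI convention, and the model name comes from the release notes, so verify both against apfel's own documentation:

```python
import json
import urllib.request

# Build an OpenAI-style chat request against apfel's local server.
# Endpoint path is assumed from the OpenAI convention, not apfel's docs.
def build_chat_request(prompt, base_url="http://localhost:11434/v1"):
    payload = {
        "model": "apple-foundationmodel",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# With `apfel --serve` running locally:
# with urllib.request.urlopen(build_chat_request("Summarize README.md")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The same payload works unchanged against a cloud endpoint, which is exactly the swap-without-rewrite property the article describes.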

Version 0.8.1, released in early April 2026, added several practical improvements. The server now performs MCP auto-execution when configured, matching CLI behavior by returning final text answers while still exposing raw tool_calls for clients that need them. Chat mode (apfel --chat) gained proper line editing through libedit, delivering arrow-key navigation, history recall, and inline editing. Additional changes include stricter model name validation (only apple-foundationmodel is accepted) and more robust parsing of malformed tool-call JSON inside code blocks.

The project ships 122 unit tests and 121 integration tests, reflecting disciplined engineering for a tool that sits at the intersection of system frameworks and developer workflows. It can be installed via Homebrew after tapping Arthur-Ficial/tap, or built from source with the macOS 26.4 SDK using the included Makefile.

For builders, apfel matters because it removes artificial barriers between Apple's on-device intelligence and the programmable environment developers actually inhabit. Shell scripts, build pipelines, local agents, and privacy-sensitive applications can now treat the system's language model as just another reliable tool.

Use Cases
  • Mac developers piping documents to local LLM for summarization
  • Engineers running OpenAI SDK code against on-device inference
  • Teams implementing tool-calling agents without cloud dependencies
Similar Projects
  • ollama - Provides OpenAI-compatible local serving but requires downloading and managing separate models instead of using Apple's built-in foundation model
  • llama.cpp - Delivers efficient inference for open-source models across platforms whereas apfel focuses exclusively on Apple's FoundationModels framework
  • MLX - Offers low-level Apple Silicon machine learning primitives but lacks apfel's ready-to-use CLI, server, and tool-calling abstractions

More Stories

OmniVoice Delivers Voice Cloning Across 600 Languages 🔗

Diffusion-based TTS system offers fast inference and precise voice control

k2-fsa/OmniVoice · Python · 1.5k stars · 4d old

OmniVoice is an open-source zero-shot text-to-speech system that synthesizes speech in over 600 languages from a single model. Developed by k2-fsa, the Python project uses a diffusion language model architecture that balances audio quality with practical speed.

The system supports high-quality voice cloning from short reference audio. It also enables voice design through explicit attributes including gender, age, pitch, dialect and speaking style. Users can insert non-verbal cues such as [laughter] and correct pronunciation with phonemes or pinyin.

Inference reaches a real-time factor of 0.025, roughly 40 times faster than real time. The clean architecture simplifies further development and scaling.
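To make the real-time factor concrete: synthesis time is RTF times audio duration, so an RTF of 0.025 means one second of compute per 40 seconds of audio. A quick illustrative calculation:

```python
# Real-time factor (RTF): synthesis_seconds = RTF * audio_seconds.
RTF = 0.025

def synthesis_seconds(audio_seconds, rtf=RTF):
    """Time needed to synthesize a clip of the given duration."""
    return rtf * audio_seconds

speedup = 1 / RTF                    # 40x faster than real time
minute_clip = synthesis_seconds(60)  # about 1.5 s to generate a 60 s clip
```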

Installation requires PyTorch followed by pip install omnivoice or an editable local build. The package provides a Python API, command-line tools and support for NVIDIA GPUs or Apple Silicon. Version 0.1.2 fixed MPS cloning, single-GPU fine-tuning and non-verbal audio generation.

By removing the need for language-specific training data, OmniVoice lowers the barrier for speech technology in low-resource languages while maintaining consistent quality and control.

Key technical features:

  • Zero-shot multilingual synthesis
  • Attribute-controlled voice design
  • RTF 0.025 inference speed
  • Phoneme-level pronunciation correction
Use Cases
  • Developers building voice assistants for low-resource languages
  • Researchers cloning voices for cross-lingual speech experiments
  • Teams generating localized audiobooks with consistent speakers
Similar Projects
  • XTTS - supports far fewer languages with similar cloning approach
  • Tortoise-TTS - delivers high quality but much slower inference
  • Piper - requires per-language training unlike OmniVoice's zero-shot design

AionUi Update Improves AI Agent Transparency 🔗

Version 1.9.5 adds inline reasoning display and refined interfaces for clearer control

iOfficeAI/AionUi · TypeScript · 21k stars · 8mo old

AionUi has released version 1.9.5, focusing on greater visibility into how its built-in AI agents operate.

The update introduces ACP Inline Thinking and plan display. As agents work through tasks, their reasoning now appears inline. Plan steps are deduplicated to prevent repetition, and a cleaner processing indicator shows during active execution. These changes address previous regressions in agent output clarity.

The assistant homepage and sidebar received a visual refresh. Switching between agents is now smoother, with improved layout for actions and skills market access. Message timestamps appear on hover, and the WeChat channel now supports file sending alongside text.

Bug fixes include startup detection of CLI configuration errors with actionable guidance, plus corrections for failed conversation auto-titles.

AionUi functions as a local cowork platform where AI agents read files, write code, browse the web, and execute multi-step tasks under user supervision. It ships with a complete agent engine that requires no separate CLI installations. The unified interface auto-detects and supports Claude Code, Gemini CLI, Codex, OpenClaw, Qwen Code and over a dozen others.

Remote access via web UI, messaging platform integrations, and cron-based scheduling enable 24/7 automation. The latest improvements make agent behavior more observable, helping users maintain control during complex workflows.

Use Cases
  • Developers running multi-agent coding tasks with file access
  • Professionals automating document workflows through scheduled agents
  • Remote users controlling AI operations from mobile devices
Similar Projects
  • OpenWebUI - offers web chat but lacks built-in agent engine
  • Continue.dev - editor integration without AionUi's 24/7 cron automation
  • Aider - CLI-focused coding assistant missing unified multi-backend UI

Repository Hosts Leaked Claude Code CLI Source 🔗

TypeScript codebase reveals Anthropic's AI agent design for terminal coding tasks

yasasbanukaofficial/claude-code · TypeScript · 1.5k stars · 4d old

The leak originated from a sourcemap file bundled with the npm package. Source maps, intended for debugging, contained the original source code under the sourcesContent key when *.map files were not excluded from the publication.
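The mechanics are easy to reproduce: a version-3 source map is plain JSON whose optional sourcesContent array carries the original files verbatim, aligned index-for-index with the sources array. A minimal sketch (the file name and contents here are invented):

```python
import json

# A tiny stand-in for a published *.map file. Real bundler output has the
# same structure; "src/agent.ts" and its contents are invented examples.
sample_map = json.dumps({
    "version": 3,
    "sources": ["src/agent.ts"],
    "sourcesContent": ["export const run = () => {};"],
    "mappings": "AAAA",
})

def extract_sources(map_text):
    """Recover {source path: original code} from a source map's JSON."""
    m = json.loads(map_text)
    return dict(zip(m.get("sources", []), m.get("sourcesContent") or []))

recovered = extract_sources(sample_map)
```

Excluding *.map files from the published package (or emitting maps without sourcesContent) closes exactly this hole.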

Core features revealed include a BashTool for shell command execution and systems for managing complex coding tasks. The architecture separates the agent logic from the underlying language model, providing what the maintainer describes as the skeleton rather than the brain.

This exposure allows technical examination of how modern AI coding tools structure their operations. It demonstrates patterns for handling conversation flows, tool integration and terminal rendering in a Node.js environment.

Developers studying the code will find concrete examples of:

  • Implementing tool-calling interfaces for LLMs
  • Designing agentic loops for iterative problem solving
  • Building responsive terminal UIs for AI interactions
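The pattern behind those bullet points reduces to a loop: the model either requests a tool or emits a final answer, and the harness executes tools and feeds results back. A deliberately simplified sketch with invented names; the leaked codebase is TypeScript and far more elaborate:

```python
# Minimal agentic loop: illustrative only, not the leaked implementation.
def run_agent(model, tools, prompt, max_steps=5):
    history = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):
        reply = model(history)                  # model picks the next action
        if reply.get("tool") in tools:
            result = tools[reply["tool"]](reply.get("args", {}))
            history.append({"role": "tool", "content": result})
        else:
            return reply.get("content")         # final answer: stop looping
    return None

# Toy "model": request the bash tool once, then answer with its result.
def toy_model(history):
    if history[-1]["role"] == "tool":
        return {"content": f"done: {history[-1]['content']}"}
    return {"tool": "bash", "args": {"cmd": "echo hi"}}

tools = {"bash": lambda args: args["cmd"].split(" ", 1)[1]}  # fake executor
answer = run_agent(toy_model, tools, "run it")
```

The separation the maintainer describes (skeleton versus brain) falls out naturally here: swap `toy_model` for any real LLM client and the loop is unchanged.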

The repository, created on March 31, 2026, has been updated to include documentation on the discovery process. It serves as a case study in both AI system design and software supply chain security.

Use Cases
  • Developers examining TypeScript LLM tool-calling implementations in agents
  • Engineers analyzing agentic workflow patterns from production codebases
  • Researchers studying terminal UI components for AI coding tools
Similar Projects
  • aider - offers comparable terminal-based AI coding with bash tool integration
  • open-interpreter - enables AI agents to execute code in user environments
  • langchain - supplies TypeScript frameworks for building LLM tool-calling agents

Rust Terminal Agent Adds Multi-Provider AI Support 🔗

Claurst enables seamless connection to various AI services through terminal commands

Kuberwastaken/claurst · Rust · 8.1k stars · 4d old

Claurst delivers a terminal coding agent implemented entirely in Rust. The tool now supports multiple AI providers through a simple /connect command, with Codex authentication currently in development.

The Rust version achieves 100% behavioral coverage of its reference specification while using substantially less memory than the original TypeScript implementation. It contains no tracking mechanisms and unlocks previously restricted experimental features. The codebase was built through clean-room engineering: one agent produced detailed behavioral specifications, while a separate agent wrote idiomatic Rust based solely on that documentation.

Developers report using Claurst to iterate on Claurst itself, creating a practical self-improvement loop. The binary runs efficiently in standard terminal environments, maintaining conversation context and executing suggested commands without additional infrastructure.

The project emphasizes privacy and resource efficiency. Rust's performance allows extended coding sessions without the resource spikes common in interpreted alternatives. Setup involves cloning the repository and running the binary, after which providers can be swapped at runtime.

This approach gives builders greater control over both the interface and the underlying AI services they employ, representing a technical evolution in terminal-based coding assistants.

Key technical details:

  • Multi-provider support via /connect
  • Memory-efficient Rust implementation
  • Experimental features enabled by default
  • Self-hosted development workflow
Use Cases
  • Developers connecting multiple AI providers in terminal sessions
  • Engineers using self-improving agents to build their own tools
  • Programmers seeking privacy-focused efficient command-line coding help
Similar Projects
  • aider - Python terminal AI coding with higher memory usage
  • Open Interpreter - broader terminal execution without native multi-provider
  • Continue - IDE-focused AI agent lacking dedicated terminal interface

Superfile Updates Add Video Previews to Terminal Interface 🔗

v1.5.0 brings multi-column panels, video previews and configurable navigation options for enhanced file management

yorukot/superfile · Go · 17k stars · Est. 2024

superfile continues to evolve as a modern terminal file manager with the release of version 1.5.0. This update delivers major enhancements that improve how users interact with files in the terminal.

The most notable addition is preview support for videos and PDFs: superfile extracts the initial frame or page, renders it as an image, and displays it inline, so users can preview media content without switching applications.
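One common way to implement that kind of preview is to ask ffmpeg for a single decoded frame. superfile is written in Go and its actual approach may differ; the helper below only builds the command line:

```python
# Build an ffmpeg invocation that writes a video's first frame to an image.
# Sketch only: superfile's real implementation may use a different tool.
def first_frame_cmd(video_path, out_image):
    return [
        "ffmpeg", "-y",          # overwrite the output file if it exists
        "-i", video_path,
        "-frames:v", "1",        # stop after one decoded video frame
        out_image,
    ]

cmd = first_frame_cmd("demo.mp4", "preview.png")
# subprocess.run(cmd, check=True)  # run only where ffmpeg is installed
```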

The new multi-column view for file panels adds columns for modification date, file size, and permissions, giving a more informative layout similar to graphical file explorers.

Other improvements include configurable navigation shortcuts in the file panel and the ability to associate file extensions with preferred applications. The release also incorporates significant code refactoring for better maintainability along with numerous bug fixes.

Developed in Go with the Bubble Tea TUI framework, superfile provides an efficient, keyboard-driven experience. It supports installation via simple one-line scripts on macOS and Linux, or package managers on Windows.

These updates make superfile more versatile for daily file operations in terminal environments, addressing the needs of users who value both power and visual feedback.

Use Cases
  • Software developers previewing media assets within large project directories quickly
  • System administrators managing remote Linux servers through terminal sessions
  • Power users customizing file associations for specific application workflows
Similar Projects
  • yazi - Rust-based TUI with strong image support but no native multi-column view
  • lf - Minimal Go file manager that lacks video/PDF previews and theming
  • ranger - Python-based manager with plugins but slower and less modern interface

Open Source Accelerates Modular AI Agent Harnesses and Skills 🔗

Developers are composing reusable skeletons, specialized tools, and runtimes to build sophisticated autonomous coding agents.

An emerging pattern in open source reveals a decisive shift toward modular architectures for AI agents. Rather than monolithic implementations, developers are separating the "harness" — the skeletal framework handling conversation loops, tool orchestration, and state management — from domain-specific capabilities and the underlying language model. This cluster of projects demonstrates a maturing ecosystem focused on composability, standardization, and extensibility.

At the core are explicit harness implementations. lintsinghua/claude-code-book delivers a 420,000-word technical dissection of agent harness anatomy across 15 chapters, while yasasbanukaofficial/claude-code supplies the TypeScript skeleton emphasizing LLM tool-calling, terminal UIs, and workflow patterns. Anthropic's own anthropics/claude-code and anthropics/skills repositories further validate this approach by open-sourcing the foundational terminal agent and a public registry of reusable agent skills.

The pattern extends to specialized infrastructure. neomjs/neo offers a multi-threaded AI-native runtime with a persistent Scene Graph that lets agents introspect and mutate live application structures. platonai/Browser4 provides a coroutine-safe browser optimized for AI agents, and mvanhorn/last30days-skill demonstrates a single skill that synthesizes research across multiple web sources without API costs.

Skills registries have proliferated as the clearest evidence of standardization. sickn33/antigravity-awesome-skills curates over 800 battle-tested capabilities, alirezarezvani/claude-skills contributes more than 220 plugins spanning engineering to C-level advisory, and domain-specific collections like mukul975/Anthropic-Cybersecurity-Skills map to MITRE ATT&CK frameworks. Memory and observability components such as thedotmack/claude-mem and jarrodwatts/claude-hud address persistent context and visibility into agent reasoning.

Orchestration and self-improvement layers complete the picture. ruvnet/ruflo enables distributed multi-agent swarms, HKUDS/OpenSpace focuses on low-cost self-evolving agents, and karpathy/autoresearch shows agents autonomously conducting machine learning research. Trading frameworks like TauricResearch/TradingAgents and AI4Finance-Foundation/FinRobot illustrate how the same modular principles transfer beyond software development.

This cluster signals where open source is heading: toward an interoperable agent economy where sophisticated systems are assembled from shared, versioned components rather than built from scratch. The technical emphasis on standardized skill interfaces, secure execution environments, hierarchical context delivery, and real-time introspection suggests a future of highly specialized, collaborative AI teammates that evolve through community contribution.

Key technical patterns include:

  • Separation of harness, skills, and memory layers
  • Tool-calling standardization across coding agents
  • Domain-specific skill registries with governance
  • AI-native runtimes enabling structural introspection

The movement transforms agents from experimental demos into production-ready, extensible platforms.

Use Cases
  • Developers automate codebase refactoring using agent skills
  • Financial teams deploy multi-agent autonomous trading systems
  • Security experts integrate MITRE-mapped skills into agents
Similar Projects
  • ruvnet/ruflo - delivers distributed swarm orchestration extending basic harness patterns
  • neomjs/neo - adds persistent Scene Graph for real-time application mutation beyond standard skills
  • HKUDS/OpenSpace - focuses on self-evolving low-cost agents complementing modular skill registries

Open Source Builds Modular Skill Ecosystems for LLM Agents 🔗

From extensive skill libraries to agent orchestration platforms, developers are crafting reusable components that turn LLMs into autonomous coding and domain-specific actors.

The open source community is coalescing around a distinct new pattern in LLM tooling: the rapid development of modular skill libraries, agent harnesses, and CLI-first interfaces that equip large language models with executable capabilities. Rather than focusing solely on model weights or basic wrappers, these projects emphasize the “body” that makes an LLM agentic—standardized tools, workflows, and integration layers that allow models to interact with codebases, terminals, data sources, and external systems.

At the center of this movement sits anthropics/claude-code, a terminal-native agent that understands repositories, executes git operations, and performs routine coding tasks through natural language. The surrounding ecosystem treats this capability as a foundation rather than a finished product. Repositories such as hesreallyhim/awesome-claude-code and sickn33/antigravity-awesome-skills curate hundreds of reusable skills covering engineering, cybersecurity, compliance, and marketing. These collections demonstrate a clear architectural choice: separate the LLM’s reasoning from a growing library of callable functions that can be swapped, extended, or versioned independently.

This pattern appears across implementations. yasasbanukaofficial/claude-code supplies an open TypeScript skeleton for tool-calling and agentic flows, while block/goose offers a Rust-based extensible agent that can install, edit, and test code with any backend LLM. badlogic/pi-mono packages a complete toolkit including CLI, TUI, Slack integration, and vLLM support, showing how the same modular philosophy scales beyond coding into operational environments.

The trend also surfaces in domain specialization and infrastructure. TradingAgents and ZhuLinsen/daily_stock_analysis illustrate multi-agent financial frameworks that combine market data, news, and LLM decision engines. NousResearch/hermes-agent and ruvnet/ruflo explore agent growth and swarm orchestration, while Arthur-Ficial/apfel proves the pattern works entirely on-device using Apple’s Foundation Models with zero cloud dependency.

Collectively these projects reveal where open source is heading technically: toward composable agent systems. By open-sourcing skills, harnesses, unified API proxies (router-for-me/CLIProxyAPI, QuantumNous/new-api), and educational resources like lintsinghua/claude-code-book and rasbt/LLMs-from-scratch, the community is building a shared nervous system for LLMs. This modular approach decouples model providers from tool creators, accelerates domain adaptation, and favors local-first or hybrid deployments over monolithic SaaS agents. The result is an emerging stack that treats agents as programmable platforms rather than black-box chatbots.

Key technical characteristics include:

  • Standardized tool-calling interfaces that work across Claude, OpenAI, Gemini and local models
  • Emphasis on memory, scheduling, and multi-modal integration
  • Separation of orchestration logic from core LLM reasoning
  • Focus on terminal and IDE-native experiences that developers actually adopt

This cluster signals a maturing phase in open source AI where the competitive advantage shifts from training bigger models to engineering better bodies for them.

Use Cases
  • Developers automating codebase tasks with natural language agents
  • Financial teams deploying multi-agent LLM trading frameworks
  • Security engineers applying specialized cybersecurity skill libraries
Similar Projects
  • LangChain - provides general agent and tool abstractions that these skill libraries build upon
  • CrewAI - focuses on role-based multi-agent orchestration similar to the swarm patterns here
  • AutoGen - enables conversational multi-agent workflows that parallel the financial and coding agents

Deep Cuts

Unveiling Secret System Prompts of Major AI Models 🔗

Explore extracted instructions powering the behaviors of ChatGPT, Claude, Gemini, Grok and many more models

asgeirtj/system_prompts_leaks · Unknown · 402 stars

Deep in GitHub sits asgeirtj/system_prompts_leaks, a quietly essential repository that extracts and archives the actual system prompts guiding today's leading AI models. From ChatGPT variants and Claude Opus to Gemini Pro, Grok, and Perplexity, it captures the precise instructions that shape each model's personality, safety boundaries, reasoning patterns, and response style.

What elevates this project beyond simple curiosity is its focus on authenticity. These are not theoretical templates but the real directives issued at the system level, revealing how companies implement chain-of-thought reasoning, handle edge cases, and balance helpfulness with restrictions. The collection updates regularly as new model versions emerge, creating a living timeline of AI design evolution.

Builders should pay attention because these prompts serve as masterclasses in effective AI orchestration. Studying them reveals sophisticated techniques for maintaining consistent tone, enforcing ethical guidelines without killing creativity, and injecting specialized capabilities. The insights translate directly into more capable custom agents, refined prompt strategies, and a deeper understanding of why certain models behave the way they do.

For anyone building with LLMs, this repository functions as both reference library and inspiration engine, offering unfiltered access to the hidden logic powering the AI revolution.

Use Cases
  • AI developers dissecting official prompts from leading language models
  • Researchers tracking changes in AI safety instructions over time
  • Builders creating custom agents mimicking top model behaviors
Similar Projects
  • awesome-chatgpt-prompts - curates user examples rather than official system leaks
  • prompt-engineering-guide - teaches techniques instead of revealing actual model prompts
  • openai-cookbook - shares usage patterns but lacks extracted system instructions

Pi-Mono Delivers Complete AI Agent Toolkit in TypeScript 🔗

Unified platform featuring coding CLI, LLM APIs, UIs, Slack bots and more

badlogic/pi-mono · TypeScript · 392 stars

In the crowded AI tooling landscape, badlogic/pi-mono stands out as a remarkably cohesive TypeScript toolkit that brings together everything needed to build, run, and deploy intelligent agents.

At its heart is a powerful coding agent CLI that turns natural language requests into executable code right in your terminal. This pairs with a unified LLM API that abstracts away differences between providers, letting developers switch models without rewriting integration code. The included TUI and web UI libraries make it simple to create interactive interfaces, whether for local experimentation or production dashboards.

The toolkit extends into real-world collaboration with a ready-to-use Slack bot that can review pull requests, answer questions, or automate routine tasks. For those running local models, built-in support for vLLM pods delivers efficient inference at scale without leaving the ecosystem.

What makes this project special is its “batteries-included” philosophy. Instead of gluing together separate libraries for agents, interfaces, and deployment, pi-mono offers a single, consistent foundation. TypeScript developers can move quickly from prototype to production while keeping their entire AI stack in one familiar language.

The potential here is significant: autonomous coding assistants, team-facing AI coworkers, and complex agent workflows all become dramatically easier to ship. For builders tired of fragmented tooling, pi-mono feels like a genuine shortcut to sophisticated AI applications.

Use Cases
  • Software engineers building autonomous coding agents via CLI
  • Teams integrating multiple LLMs with one unified API
  • Organizations deploying AI-powered Slack bots for workflows
Similar Projects
  • LangChain.js - offers LLM abstractions but lacks native CLI agents and Slack integration
  • Vercel AI SDK - excels at streaming responses yet provides no TUI or vLLM support
  • CrewAI - focuses on multi-agent orchestration while missing pi-mono's full TypeScript UI suite

Quick Hits

  • harness-books · Practical Python guides and examples for building robust test harnesses, helping developers create reliable automated testing suites. 1.1k stars
  • autoagent · AutoAgent uses autonomous AI agents to automate complex harness engineering tasks in Python, accelerating design, optimization, and validation workflows. 2.5k stars
  • hermes-agent · "The agent that grows with you." 25.3k stars

Why rasbt's LLMs-from-scratch Matters More Than Ever for Builders 🔗

The PyTorch project enables developers to code GPT-like models step by step, providing crucial insights in today's complex AI landscape

rasbt/LLMs-from-scratch · Jupyter Notebook · 90k stars · Est. 2023

Three years after its creation, rasbt/LLMs-from-scratch continues to serve as a foundational resource for developers who want to move beyond API calls and understand how large language models actually function. The repository delivers the complete code for building, pretraining, and finetuning a GPT-like LLM in PyTorch, following the exact progression outlined in Sebastian Raschka's book of the same name.

The project structures its content as a series of Jupyter notebooks that walk through every technical component. Readers implement tokenization, embedding layers, causal self-attention, multi-head attention mechanisms, feed-forward networks with GELU activations, layer normalization, and residual connections. Each notebook builds directly on the previous one, creating a working decoder-only transformer that performs next-token prediction.
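The notebooks implement these pieces in PyTorch; the masking step at the heart of causal self-attention can be sketched in plain Python (stdlib only, toy score matrix, names illustrative) to show what the mask actually does:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def causal_attention_weights(scores):
    """Apply a causal mask to a square score matrix, then row-wise softmax.

    scores[i][j] is the raw attention score of query i against key j;
    positions j > i (the future) are set to -inf before normalising.
    """
    n = len(scores)
    weights = []
    for i in range(n):
        masked = [scores[i][j] if j <= i else float("-inf") for j in range(n)]
        weights.append(softmax(masked))
    return weights

w = causal_attention_weights([[1.0, 2.0, 3.0],
                              [1.0, 1.0, 1.0],
                              [0.5, 0.5, 2.0]])
# Row 0 can only attend to itself: w[0] == [1.0, 0.0, 0.0]
```

After the softmax, every position above the diagonal carries zero weight, which is what restricts each token to attending over its own prefix during next-token prediction.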

What makes the codebase particularly valuable now is its treatment of both pretraining from raw text and practical finetuning. The code demonstrates how to load weights from larger pretrained models, allowing developers to adapt substantial models to specific tasks without starting from random initialization. This mirrors the real development process used to create production systems like ChatGPT while remaining small enough for educational use on consumer hardware.

The implementation emphasizes clarity over optimization. Instead of abstracting key operations behind library calls, the notebooks show the mathematical operations explicitly. This approach helps builders debug training instabilities, understand the impact of hyperparameter choices, and recognize why certain architectural decisions were made in modern LLMs.

Recent maintenance has kept the repository compatible with current PyTorch versions and added improved examples for instruction tuning. The README provides precise setup instructions, recommending users clone the repository with git clone --depth 1 https://github.com/rasbt/LLMs-from-scratch.git and use a Markdown previewer for documentation.

For builders, the project solves a growing problem: the increasing distance between high-level frameworks and the underlying mechanics of generative AI. As organizations deploy LLMs in critical applications, the ability to inspect, modify, and troubleshoot these systems becomes essential rather than optional.

The notebooks also cover dataset preparation, efficient training loops, and evaluation methods. This complete pipeline gives developers the confidence to experiment with novel modifications rather than treating models as immutable black boxes.

Key technical elements covered include:

  • Causal attention masking for autoregressive generation
  • Positional encoding strategies
  • AdamW optimization with proper weight decay
  • Gradient clipping and learning rate scheduling
  • Methods for loading external pretrained weights
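Two of those elements, gradient clipping and a warmup-plus-cosine schedule, are simple enough to sketch without PyTorch; the helper names here are illustrative, not the book's:

```python
import math

def clip_grad_norm(grads, max_norm):
    """Scale a flat list of gradients so their global L2 norm is at most max_norm."""
    total = math.sqrt(sum(g * g for g in grads))
    if total > max_norm:
        scale = max_norm / total
        grads = [g * scale for g in grads]
    return grads

def lr_at(step, warmup_steps, total_steps, peak_lr):
    """Linear warmup to peak_lr, then cosine decay toward zero."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

clipped = clip_grad_norm([3.0, 4.0], max_norm=1.0)  # norm 5.0 rescaled to 1.0
```

Framework optimizers apply the same two ideas per tensor rather than per flat list, but the arithmetic is unchanged.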

In an era of rapid model releases and competing architectures, rasbt/LLMs-from-scratch grounds developers in first principles. Those who work through the notebooks gain the mental models necessary to evaluate new research papers, contribute to open-source LLM projects, and make informed decisions about when to build versus when to adapt existing systems.


Use Cases
  • University professors teaching transformer architecture to students
  • Researchers prototyping small language models for experiments
  • Engineers finetuning pretrained models on domain data
Similar Projects
  • karpathy/nanoGPT - Delivers a more minimal single-file implementation focused on training speed rather than step-by-step education
  • The Annotated Transformer - Provides detailed mathematical commentary on the original paper but lacks comprehensive finetuning code
  • lit-gpt - Offers optimized implementations across multiple model families but assumes more prior LLM knowledge than rasbt's project

More Stories

LLM Engineering Repository Refreshed With New Course Weeks 🔗

Updated materials replace all prior content for current LLM development practices

ed-donner/llm_engineering · Jupyter Notebook · 5.5k stars Est. 2024

Ed Donner's llm_engineering repository has completed its major December 2025 refresh, replacing every week of the original course with new material. The update reflects changes in model availability and engineering patterns since the repo first appeared in 2024, while preserving access to legacy code through the original branch.

Users run git fetch followed by git checkout original to revert to the version matching the initial video series. The main branch now begins with Llama 3.2 projects, explicitly advising against Llama 3.3 on consumer hardware because of its memory requirements. Subsequent weeks advance from basic prompting through retrieval-augmented generation to multi-agent systems, with each notebook building directly on the previous week's code.

The materials consist entirely of Jupyter Notebooks that let engineers experiment with local model calls, vector stores, and agent orchestration. Supporting resources, including slides, links and an FAQ, live at edwarddonner.com. Donner continues to offer direct support via email at ed@edwarddonner.com and through the Udemy course platform.
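The retrieval step those RAG weeks build toward reduces to scoring documents against a query; a toy bag-of-words version (not from the course, which works with a real vector store) looks like:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical weekly snippets standing in for a document store.
docs = {
    "wk5": "retrieval augmented generation with a vector store",
    "wk7": "multi agent systems orchestrating llama models",
}
query = Counter("vector store retrieval".split())
best = max(docs, key=lambda k: cosine(query, Counter(docs[k].split())))
# best == "wk5"
```

Swapping the word counts for embedding vectors turns this ranking loop into the retrieval stage of a real RAG pipeline.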

The refreshed repository keeps the eight-week progression relevant as new open-source models emerge, giving practitioners concrete, incremental experience rather than isolated examples.

Use Cases
  • Software engineers building incremental LLM projects in Jupyter notebooks
  • Developers implementing RAG pipelines through structured weekly exercises
  • AI practitioners creating local multi-agent systems with Llama models
Similar Projects
  • fastai/fastbook - delivers similar project-based learning but for deep learning
  • huggingface/course - provides structured LLM lessons without full weekly notebooks
  • langchain-ai/langchain - supplies production libraries instead of teaching core engineering

Streamlit 1.56 Enhances Data App Navigation and Widgets 🔗

New release adds menu buttons, media columns and improved table controls for Python developers

streamlit/streamlit · Python · 44.1k stars Est. 2019

Streamlit has released version 1.56.0, adding practical improvements to its framework for converting Python scripts into interactive web applications. The update targets navigation, data presentation and media support, addressing common requests from developers building dashboards, reports and chat tools.

Key additions include the ability to specify visible items in st.navigation via the expanded parameter and on_click rerun support for st.link_button. File type shortcuts now simplify st.file_uploader and st.chat_input. Developers can programmatically control selections in st.dataframe, while st.column_config gains AudioColumn and VideoColumn types.

A new st.menu_button widget expands UI options. The st.table component supports hide_index and hide_header parameters, and st.chat_input accepts a height setting for better layout control. These changes build on Streamlit's live-editing model, where script modifications update the app instantly without restarting.

The release maintains the project's core approach: writing standard Python that becomes shareable web apps in minutes. Its Community Cloud platform continues to offer free deployment and management. Six years after its initial release, Streamlit remains focused on reducing the time between Python analysis and interactive applications used across machine learning, finance and scientific workflows.


Use Cases
  • Data scientists creating interactive dashboards for machine learning model exploration
  • Business analysts generating dynamic financial reports with real-time data updates
  • Researchers building multi-page applications for scientific data visualization
Similar Projects
  • Gradio - simpler ML demo focus with fewer enterprise data tools
  • Dash - requires more code but offers greater customization depth
  • Panel - provides reactive components using different widget syntax

PyTorch Lightning 2.6.1 Refines Large Model Scaling 🔗

Release removes Python 3.9 support and adds logger integration for modern AI workflows

Lightning-AI/pytorch-lightning · Python · 31k stars Est. 2019

PyTorch Lightning has shipped version 2.6.1, updating the framework that lets developers pretrain and finetune AI models of any size on one or 10,000+ GPUs with no changes to core model code.

The library structures PyTorch projects to remove repetitive engineering. It automates backpropagation, mixed precision, device placement, and distributed strategies while leaving model logic untouched. Users define training steps inside a LightningModule; the Trainer handles the rest. Two packages exist: the full PyTorch Lightning framework for structured workflows and Lightning Fabric for granular control.

This release drops support for Python 3.9 following its end-of-life, deprecates the to_torchscript method to align with PyTorch's direction, and adds method chaining to LightningModule.freeze() and unfreeze(). It also introduces litlogger integration. Fixes address hyperparameter inheritance, LightningDataModule checkpoint restoration, and ModelParallelStrategy compatibility with torch.compile.
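The freeze()/unfreeze() chaining works by returning self; a minimal stand-in class (hypothetical, not Lightning's implementation) shows the pattern:

```python
class ToyModule:
    """Illustrative stand-in for the return-self chaining pattern
    that 2.6.1 adds to freeze()/unfreeze(); not Lightning's code."""

    def __init__(self):
        self.trainable = True

    def freeze(self):
        self.trainable = False
        return self  # returning self is what enables chaining

    def unfreeze(self):
        self.trainable = True
        return self

m = ToyModule().freeze().unfreeze().freeze()
```

With chaining, staged finetuning code can toggle trainability inline rather than across separate statements.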

Lightning can run on local hardware, private clusters, or through Lightning Cloud for managed GPU resources without infrastructure setup. A companion library, LitServe, enables custom inference servers written entirely in Python.

As model sizes continue growing, the framework's separation of science from engineering remains relevant for research and production teams.

Use Cases
  • AI researchers scaling large language models across multi-node GPU clusters
  • Engineers finetuning vision models with automated distributed training code
  • Development teams deploying production deep learning systems on cloud infrastructure
Similar Projects
  • Hugging Face Accelerate - provides distributed utilities with lighter abstraction
  • DeepSpeed - focuses on memory optimization for massive parameter models
  • PyTorch Ignite - offers training helpers but with less opinionated structure

Quick Hits

ultralytics Ultralytics YOLO delivers blazing-fast object detection and tracking models that let builders ship production-ready computer vision apps in minutes. 55.4k
FinRL FinRL equips builders with reinforcement learning frameworks to develop, train, and backtest profitable AI trading strategies in financial markets. 14.7k
phoenix Phoenix gives builders powerful observability tools to trace, evaluate, and debug LLM applications with interactive visualizations and metrics. 9.2k
diffusers Diffusers lets builders generate images, video, and audio with state-of-the-art diffusion models using simple PyTorch APIs. 33.3k
netron Netron instantly visualizes neural networks and ML models across 20+ formats, helping builders inspect and debug architectures at a glance. 32.7k

MAVROS 2.14.0 Tightens MAVLink Requirements for ROS UAV Work 🔗

New release adds PX4 offboard control example while enforcing updated protocol version for modern autonomy stacks

mavlink/mavros · C++ · 1.1k stars Est. 2013 · Latest: 2.14.0

MAVROS has shipped version 2.14.0, introducing a breaking change that requires mavlink version 2025.12.12 or newer. The update also delivers the project's first official PX4 offboard control example script, contributed by new contributor Tuxliri.

The change reflects the project's ongoing alignment with evolving MAVLink standards. Since its creation in 2013, MAVROS has served as the de facto gateway between the MAVLink protocol used by flight controllers and the ROS ecosystem. It translates between MAVLink packets and ROS topics, services, and parameters, while providing a proxy interface for ground control stations.

The project is organized into several packages. The core mavros node handles message routing, parameter synchronization, and plugin management. mavros_extras supplies additional nodes and plugins for specialized tasks. libmavconn offers a standalone library that works outside ROS, and mavros_msgs defines the custom messages and services that bridge the two worlds.

Recent releases have focused on ROS2 compatibility. Version 2.0.0 introduced the first ROS2 support, while 2.6.0 dropped end-of-life distributions and now targets Humble, Iron, and Rolling. The 2.14.0 release continues this modernization by updating GitHub Actions dependencies, bumping actions/cache from version 4 to 5.

Technical users will notice the altitude conversion logic that has been stable since 2017. MAVROS uses GeographicLib to translate between AMSL altitudes reported by flight controllers and the WGS84 frame expected by ROS navigation stacks. This detail matters for any project combining GPS-based navigation with ROS mapping or planning nodes.
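The conversion itself is a single offset once the geoid separation is known; this sketch hard-codes an illustrative separation value, whereas MAVROS resolves it per coordinate through GeographicLib:

```python
def amsl_to_wgs84(amsl_alt_m, geoid_separation_m):
    """Convert an AMSL (orthometric) altitude to a WGS84 ellipsoid height.

    geoid_separation_m is the local geoid-ellipsoid offset N; MAVROS looks
    this up per coordinate via GeographicLib rather than taking it as an
    argument. This function only shows the arithmetic: h = H + N.
    """
    return amsl_alt_m + geoid_separation_m

def wgs84_to_amsl(ellipsoid_alt_m, geoid_separation_m):
    return ellipsoid_alt_m - geoid_separation_m

# Illustrative separation of +48 m (roughly central-European EGM96 values).
h = amsl_to_wgs84(500.0, 48.0)  # -> 548.0
```

Getting the sign of N wrong produces tens of metres of altitude error, which is exactly the class of bug this translation layer exists to prevent.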

The new offboard control example arrives at a useful moment. Offboard mode lets external computers compute setpoints and send them to the flight controller at high frequency, a pattern common in research and commercial autonomous systems. The script provides a concrete starting point for developers moving from simple waypoint missions to more sophisticated behaviors computed in ROS.

Teams currently using older MAVLink versions must update before adopting 2.14.0. The requirement ensures access to recent protocol improvements in message efficiency and feature support. Those running PX4 or ArduPilot on ROS2 should review their dependency chains and test the new example in SITL before deploying to hardware.

MAVROS remains essential infrastructure for anyone building unmanned systems that combine ROS-based intelligence with production flight controllers. The latest release reinforces its role without dramatic redesign, focusing instead on compatibility and practical examples that reduce integration friction.


Use Cases
  • Drone engineers connecting PX4 to ROS2 autonomy nodes
  • Researchers implementing offboard control in UAV platforms
  • Developers building custom MAVLink plugins for GCS proxy
Similar Projects
  • px4_ros_com - Uses uXRCE-DDS for direct PX4-ROS2 communication without MAVLink translation
  • mavlink-router - Routes MAVLink traffic efficiently but offers no ROS topic integration
  • ardupilot_ros - Provides ArduPilot-specific ROS bridge with narrower scope than MAVROS

More Stories

Evo Simplifies Evaluation of Odometry and SLAM Algorithms 🔗

Updated ROS2 support keeps Python package essential for modern trajectory analysis

MichaelGrupp/evo · Python · 4.2k stars Est. 2017

Eight years after its creation, MichaelGrupp/evo remains the standard Python package for evaluating odometry and SLAM output. The project received its latest updates in April 2026, ensuring continued compatibility with Python 3.10+ and current robotics stacks.

The tool ingests TUM trajectory files, KITTI pose files, EuRoC MAV data, and ROS or ROS2 bagfiles containing geometry_msgs/PoseStamped, TransformStamped, or nav_msgs/Odometry messages. It supplies common routines for trajectory association, alignment, and scale adjustment required by monocular systems.
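The simplest metric built on those routines, absolute trajectory error, can be sketched in plain Python; this assumes poses are already associated and aligned, which evo's pipeline handles first:

```python
import math

def ate_rmse(gt, est):
    """Absolute trajectory error (RMSE) over already-associated, already-aligned
    position pairs. Illustrative only: evo performs timestamp association and
    SE(3)/Sim(3) alignment before computing a metric like this."""
    assert len(gt) == len(est)
    sq = 0.0
    for (gx, gy, gz), (ex, ey, ez) in zip(gt, est):
        sq += (gx - ex) ** 2 + (gy - ey) ** 2 + (gz - ez) ** 2
    return math.sqrt(sq / len(gt))

gt  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
est = [(0.0, 0.1, 0.0), (1.0, -0.1, 0.0), (2.0, 0.1, 0.0)]
err = ate_rmse(gt, est)  # -> 0.1
```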

Flexible output distinguishes evo from rigid dataset benchmarks. Users generate plots, export tables to Excel, or produce LaTeX figures directly from the command line. The modular core and tools libraries allow custom extensions while the CLI handles most daily tasks without additional coding. Benchmarks show it executes faster than comparable Python evaluation suites.

Because it avoids replicating any single dataset protocol, teams can apply the same commands across KITTI, TUM, and EuRoC experiments. For developers migrating SLAM pipelines to ROS2 or processing long-duration autonomous runs, evo delivers consistent metrics without rewriting evaluation code.

Installation uses the familiar pip install evo command, with editable installs available for those tracking the latest changes from source.

Use Cases
  • Robotics engineers comparing SLAM trajectories across KITTI and TUM
  • ROS2 developers validating odometry accuracy from bagfile messages
  • Autonomous teams computing alignment metrics on EuRoC MAV data
Similar Projects
  • kitti-eval - limited to KITTI poses while evo supports multiple formats
  • tum-rgbd-tools - dataset-specific protocols unlike evo's general framework
  • rpg_trajectory_evaluation - fewer output options and no ROS2 bag support

Thunderbots Upgrades Robot Soccer AI Control System 🔗

Version 1.2.1 adds automated game control and enhanced logging capabilities

UBC-Thunderbots/Software · C++ · 63 stars Est. 2018

UBC Thunderbots has issued v1.2.1 of its robot soccer software, introducing practical enhancements to the C++ codebase that controls its autonomous fleet. The update adds an --enable_autogc flag that automates game-control processes, simplifying operations during both development and competitive play.

Improvements to stat recording now accommodate longer durations, allowing teams to capture detailed performance metrics without interruption. Such capabilities are essential for thorough evaluation ahead of major tournaments.

Additionally, a new schema for verbose logs standardizes how diagnostic data is structured and stored. This change should make it easier for engineers to analyze system behavior and debug issues efficiently.

The software forms the backbone of the Thunderbots' RoboCup Small Size League efforts, managing everything from individual robot control to collective team strategies. Its architecture supports sophisticated AI that enables the robots to play soccer with remarkable autonomy.

Contributors must adhere to the project's style guide and consult architecture documents before making changes. The release notes highlight the collaborative effort behind these refinements, with contributions from multiple team members.

These incremental advances demonstrate the sustained commitment required to compete in the rapidly progressing field of robotic sports. By focusing on these technical details, the team ensures their robots can perform reliably in the demanding environment of international robot soccer competitions.

Use Cases
  • UBC students developing AI algorithms for autonomous soccer robots
  • Engineers analyzing long-duration performance statistics from robot tests
  • Developers integrating schema for structured verbose logging analysis
Similar Projects
  • ER-Force/Software - provides comparable real-time C++ control with different tactical priorities
  • TIGERs-Mannheim - focuses on advanced vision processing versus Thunderbots' logging emphasis
  • grSim - delivers simulation framework rather than production robot firmware integration

Kornia Shifts Toward End-to-End Vision Models 🔗

Differentiable library integrates VLMs and VLAs for unified spatial AI pipelines

kornia/kornia · Python · 11.2k stars Est. 2018

Kornia is redirecting development toward end-to-end vision models. The PyTorch-based library is now prioritising integration of state-of-the-art Vision Language Models and Vision Language Agents to create complete spatial AI solutions within existing deep learning workflows.

The move extends its established collection of more than 500 differentiable operators. These continue to support core image processing tasks including Gaussian and Sobel filters, affine and homography transformations, histogram equalisation, and edge detection with Canny and Laplacian methods. All operations remain GPU-accelerated and fully differentiable for seamless inclusion in training pipelines.
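Underneath the differentiable wrappers, an operator like the Sobel filter is just a small convolution; a plain-Python sketch on a tiny patch (no GPU, no autograd, illustrative only) makes that concrete:

```python
def sobel_gx(img):
    """Horizontal-gradient Sobel response for the interior pixels of a 2D list.
    Kornia ships this as a differentiable, GPU-accelerated op; this version
    only illustrates the underlying 3x3 convolution."""
    kx = [[-1, 0, 1],
          [-2, 0, 2],
          [-1, 0, 1]]
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

# A vertical step edge: left half 0, right half 1.
patch = [[0, 0, 1, 1]] * 4
gx = sobel_gx(patch)  # interior responses peak along the edge
```

Expressing the same kernel as a differentiable layer is what lets gradients flow through classical operators into a training pipeline.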

Recent release v0.8.2 brings documentation updates, dependency bumps and a community migration from previous channels to Discord. The library maintains its augmentation tools such as AugmentationSequential, RandAugment and TrivialAugment, alongside pre-trained models for face detection with YuNet, feature matching with LoFTR and LightGlue, and segmentation with SAM.

This evolution addresses the demand for frameworks that combine low-level geometric operations with high-level semantic understanding in a single differentiable environment. Robotics and autonomous systems developers can now prototype complete vision pipelines without switching between separate libraries for classical processing and modern multimodal models.

The changes reflect broader industry movement toward unified vision systems that support both precise spatial reasoning and language-guided interpretation.

Use Cases
  • Robotics engineers building differentiable spatial AI pipelines
  • Researchers training VLMs with geometric image transformations
  • Autonomous vehicle teams implementing end-to-end vision agents
Similar Projects
  • torchvision - provides standard transforms but lacks Kornia's geometric differentiability
  • OpenCV - supplies traditional vision algorithms without native PyTorch integration
  • timm - focuses on image classification models but omits low-level operators

Quick Hits

autoware_universe Modular autonomous driving stack delivering perception, planning, and control tools for building self-driving vehicles. 1.6k
PX4-Autopilot Advanced autopilot software equipping drones with precise flight control and autonomous mission capabilities. 11.4k
auto-apms Versatile ROS 2 framework for creating complex robot behaviors using intuitive behavior trees. 93
rtabmap Real-time RGB-D SLAM library enabling accurate mapping, localization, and loop closure for robots. 3.7k
vortex-auv Specialized GNC software for AUVs optimized for high-performance guidance in underwater robotics competitions. 119

Open Standard Equips AI Agents With Professional Cybersecurity Skills 🔗

Repository maps 753 structured skills to MITRE ATT&CK and NIST CSF 2.0 for use across Claude Code, Copilot and other AI platforms

mukul975/Anthropic-Cybersecurity-Skills · Python · 4k stars 1mo old · Latest: v1.1.0

AI agents have struggled to access the detailed, practitioner-level knowledge required for complex security work. A new open-source project directly addresses this gap by delivering a standardized collection of cybersecurity capabilities that agents can discover and execute on demand.

The repository contains 753 structured skills spanning 26 domains, including penetration testing, digital forensics and incident response (DFIR), threat intelligence, cloud security, supply chain security and operational technology. Each skill follows the agentskills.io open standard: a YAML frontmatter header for machine discovery, a Markdown document containing step-by-step execution guidance, and reference files providing deeper technical context.
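A skill file in that format splits cleanly into frontmatter and body; a naive stdlib parser (field names hypothetical, real tooling would use a YAML library) illustrates the layout:

```python
def split_skill(text):
    """Split an agentskills.io-style file into frontmatter fields and a
    Markdown body. Naive flat key: value parsing for illustration only."""
    lines = text.splitlines()
    assert lines[0] == "---", "expected opening frontmatter delimiter"
    end = lines.index("---", 1)
    meta = {}
    for line in lines[1:end]:
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    body = "\n".join(lines[end + 1:]).strip()
    return meta, body

# Hypothetical skill file for demonstration.
skill = """---
name: hypothetical-example-skill
domain: threat-intelligence
---
## Steps
1. Collect indicators.
"""
meta, body = split_skill(skill)
```

The frontmatter is what agents scan for discovery; the Markdown body holds the step-by-step execution guidance.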

All skills are mapped to the full MITRE ATT&CK Enterprise matrix—covering all 14 tactics and more than 200 techniques—and aligned with NIST CSF 2.0. This gives AI agents a structured knowledge representation equivalent to what senior security engineers carry, enabling consistent and accurate application across diverse scenarios.

The collection works with more than 20 platforms, including Claude Code, GitHub Copilot, OpenAI Codex CLI, Cursor, Gemini CLI and various LangChain implementations. Installation takes seconds using npx skills add mukul975/Anthropic-Cybersecurity-Skills or through the Claude Code plugin marketplace. No API keys or complex configuration are required.

Version 1.1.0 added 30 new skills that reflect current threat priorities. New capabilities include:

  • AI Security: detecting-ai-model-prompt-injection-attacks, implementing-llm-guardrails-for-security
  • Supply Chain: analyzing-sbom-for-supply-chain-vulnerabilities, detecting-typosquatting-packages-in-npm-pypi
  • Firmware: analyzing-uefi-bootkit-persistence, performing-firmware-extraction-with-binwalk
  • Cloud Native: implementing-aws-nitro-enclave-security, implementing-ebpf-security-monitoring
  • Threat Hunting: hunting-for-dcom-lateral-movement, detecting-command-and-control-over-dns

These additions allow agents to assist with tasks such as auditing Kubernetes RBAC, reversing .NET malware, deploying Active Directory honeytokens, and preparing for SOC2 Type 2 audits.

For builders, the project solves a fundamental limitation in current AI security tooling: the absence of production-grade, machine-readable security knowledge. By standardizing this expertise, the repository enables more reliable agent behavior in real security operations. Written in Python and licensed under Apache 2.0, it invites contributions from both security practitioners and AI developers seeking to expand the library.

The approach represents a practical step toward integrating deep domain expertise into autonomous and semi-autonomous security agents.

Use Cases
  • Red team operators performing AI-assisted penetration testing
  • DFIR teams executing memory forensics with AI guidance
  • Cloud engineers auditing Kubernetes security using AI agents
Similar Projects
  • Atomic Red Team - provides ATT&CK atomic tests but lacks the YAML/Markdown standard for AI agent discovery and execution
  • MITRE CALDERA - automates adversary emulation yet offers no comprehensive library of step-by-step skills for LLMs
  • Sigma - supplies detection rules without structured execution guidance designed for AI agent consumption

More Stories

Radare2 6.1.2 Sharpens Binary Analysis Engine 🔗

Brainroot release refines function signatures and basic block handling for precision

radareorg/radare2 · C · 23.4k stars Est. 2012

Radare2 has shipped version 6.1.2, codenamed "Brainroot," with 224 commits from 15 contributors focused on tightening its core analysis capabilities.

The update addresses several practical pain points. It now preserves anal.timeout settings across iterators, adds APIs to get and set function signatures, and fixes selection of overlapped functions in pdc. Invalid code checks have been unified, stopping filler-prefix blocks earlier to avoid wasted cycles.

To improve stability, the tool no longer crashes when hitting large basic-block limits and sets a new 64KB default. The maximum basic-block size has been reduced from 512K to 8K, while jump-table validation checks have been strengthened. These changes matter for users regularly processing complex or obfuscated binaries where small inaccuracies compound quickly.

Now in its 14th year, the LGPLv3 command-line framework continues to support editing local files, viewing kernel memory, and debugging both locally and over gdb/windbg. Scripting via the embedded JavaScript interpreter or r2pipe remains central, as does the plugin ecosystem delivered through r2pm.

Installation follows the established path: clone the repository and run sys/install.sh, with meson and make builds both supported. The release keeps radare2's reputation for low-level control intact while delivering the incremental reliability that long-time users expect.


Use Cases
  • Security researchers dissecting malware binaries on Unix systems
  • Forensics experts examining kernel memory and disk images
  • Developers debugging remote programs via gdb integration
Similar Projects
  • Ghidra - NSA open-source toolkit with graphical interface
  • IDA Pro - Commercial disassembler with extensive plugin support
  • Binary Ninja - Modern RE platform focused on intermediate languages

Proxy List Refreshed With 7,868 Fresh Entries 🔗

Long-running repository continues supplying SOCKS and HTTP proxies updated as of April 2026

TheSpeedX/PROXY-List · Unknown · 5.4k stars Est. 2018

TheSpeedX/PROXY-List received its latest update on 5 April 2026, bringing the total number of collected proxies to 7,868. The project, first created in 2018, maintains separate plain-text lists of free public proxies scraped from across the internet and refreshed periodically for easy consumption.

The repository provides three distinct lists: SOCKS5, SOCKS4, and HTTP. Developers can fetch them directly via raw GitHub URLs:

  • https://raw.githubusercontent.com/TheSpeedX/SOCKS-List/master/socks5.txt
  • https://raw.githubusercontent.com/TheSpeedX/SOCKS-List/master/socks4.txt
  • https://raw.githubusercontent.com/TheSpeedX/SOCKS-List/master/http.txt

The maintainer explicitly states the lists are for educational purposes only, notes they do not control the proxies' reliability or legality, and requests credits, stars, and follows from those who use the data. A companion validation tool is also referenced for checking which SOCKS proxies are currently operational.

In an environment of increasing IP-based blocking and privacy scrutiny, the project's consistent updates offer builders a regular starting point for testing network conditions, though free proxies remain inherently unstable and should be validated before production use. The simple text-file format ensures straightforward integration into scripts and testing pipelines.
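The one-entry-per-line format parses in a few lines; this sketch uses documentation-range addresses and leaves liveness validation to a separate step, as the maintainer recommends:

```python
def parse_proxy_list(text):
    """Parse a plain-text proxy list (one host:port per line) into tuples,
    skipping blanks and malformed entries. Operates on the raw .txt content
    once fetched; checking which proxies are alive is a separate concern."""
    proxies = []
    for line in text.splitlines():
        line = line.strip()
        if not line or ":" not in line:
            continue
        host, _, port = line.rpartition(":")
        if port.isdigit():
            proxies.append((host, int(port)))
    return proxies

sample = "203.0.113.7:1080\n\n198.51.100.2:8080\nnot-a-proxy\n"
parsed = parse_proxy_list(sample)
# -> [("203.0.113.7", 1080), ("198.51.100.2", 8080)]
```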


Use Cases
  • Developers rotating proxies in web scraping pipelines
  • Security researchers testing anonymity tool configurations
  • Penetration testers simulating distributed network traffic
Similar Projects
  • monosans/proxy-list - validates proxies before inclusion
  • Jetkai/proxy-list - updates more frequently with health checks
  • clarketm/proxy-list - automates collection with additional metadata

IntelOwl v6.6.0 Upgrades PostgreSQL and Analysis Tools 🔗

Release adds Machofile support, URLScan visualizer and database improvements for scaled operations

intelowlproject/IntelOwl · Python · 4.5k stars Est. 2019

The most notable change is the migration to PostgreSQL v18, closing a long-standing scalability issue for organizations running high volumes of concurrent analyses. A new Machofile analyzer now handles Mach-O binaries, extending coverage to macOS malware formats that previously required separate tooling.

Additional updates include an improved InQuest analyzer with more accurate observable type detection, a URLScan.io Crawl Results Visualizer that enhances the built-in GUI, and a management command for checking new releases. Fixes address HTTP timeouts in the Yeti connector, correct its v2 API usage, and add background colors to TLP badges in the job API. Unit tests now run significantly faster.

IntelOwl combines enrichment of files and observables — IPs, domains, URLs, hashes — through a single REST API call. Its modular plugin system supports analyzers that query external services or run local tools such as Yara, connectors that push data to MISP or OpenCTI, pivots that chain related analyses, playbooks for repeatable workflows, and ingestors that consume external streams.
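
That single-call workflow amounts to classifying an observable and submitting it with a TLP level. The sketch below shows the general shape of such a request body; the field names and the type-detection heuristic are illustrative assumptions for this article, not IntelOwl's documented schema:

```python
import ipaddress
import re

def classify_observable(value: str) -> str:
    """Rough observable-type detection: ip, hash, url, or domain."""
    try:
        ipaddress.ip_address(value)
        return "ip"
    except ValueError:
        pass
    # MD5 / SHA-1 / SHA-256 hex digests
    if re.fullmatch(r"[0-9a-fA-F]{32}|[0-9a-fA-F]{40}|[0-9a-fA-F]{64}", value):
        return "hash"
    if value.startswith(("http://", "https://")):
        return "url"
    return "domain"

def build_analysis_request(value: str, tlp: str = "CLEAR") -> dict:
    """Assemble an enrichment request body (illustrative field names)."""
    return {
        "observable_name": value,
        "observable_classification": classify_observable(value),
        "tlp": tlp,
    }
```

The new release's "more accurate observable type detection" in the InQuest analyzer addresses exactly this kind of classification step on the server side.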

Written in Python with Django, the project remains a practical automation layer for SOC and DFIR teams that need consistent, scalable access to threat data without manual repetition across disparate tools.

Use Cases
  • SOC analysts enriching IOCs from multiple sources simultaneously
  • DFIR teams analyzing Mach-O malware binaries at scale
  • Threat hunters exporting enriched data to MISP instances
Similar Projects
  • MISP - focuses on indicator sharing while IntelOwl emphasizes analysis
  • OpenCTI - knowledge base that IntelOwl connects to via plugins
  • SpiderFoot - OSINT automation lacking IntelOwl's enterprise plugin framework

Quick Hits

cilium Cilium harnesses eBPF for high-performance networking, security, and observability that scales effortlessly in cloud-native environments. 24k
hacktricks HackTricks delivers a battle-tested collection of CTF tricks, real-world exploits, and pentesting techniques every hacker should know. 11.1k
HackBrowserData HackBrowserData extracts and decrypts passwords, cookies, history and more from every major browser across Windows, macOS, and Linux. 13.6k
sops Sops provides simple, flexible encryption for secrets in config files while staying friendly with version control and automation. 21.4k
bbot BBOT recursively scans the internet to map targets, uncover subdomains, and chain OSINT data for hackers and red teamers. 9.6k

NullClaw Delivers Smallest Autonomous AI Assistant Infrastructure in Zig 🔗

Static 678 KB binary boots in milliseconds on $5 hardware while supporting multiple channels and model providers with minimal resources

nullclaw/nullclaw · Zig · 7k stars 1mo old · Latest: v2026.4.4

NullClaw offers developers a radically different foundation for building AI assistants. Written entirely in Zig, the project compiles to a static binary that measures just 678 KB and consumes roughly 1 MB of RAM at runtime. It requires nothing beyond libc, boots in milliseconds, and runs comfortably on any low-cost single-board computer.

The system functions as fully autonomous AI assistant infrastructure. It accepts input through CLI, Telegram, and Discord interfaces while connecting to local and remote model providers including Ollama, vLLM, and custom endpoints. This design eliminates the heavy Python runtimes and complex dependency graphs that characterize most agent frameworks, giving builders precise control over resource usage and deployment targets.

Recent releases demonstrate growing production capability. Contributors wired up cron-based session_target routing for reliable agent job scheduling. Multi-modal support now allows the assistant to process images alongside text. Shell execution has been fixed to prevent hanging on interactive commands, while inbound message handling adds debouncing across all channels to manage rapid-fire input from Telegram, Discord, and CLI.

Reliability improvements address real-world deployment challenges. The system now includes model fallback configurations and better error classification that correctly handles message fields and unsupported image patterns. Custom vLLM and Qwen endpoints receive proper reasoning support, and explicit custom-url model references are resolved correctly. Documentation now covers these reliability patterns in detail.

The architecture prioritizes minimalism without sacrificing functionality. Builders configure behavior through straightforward command-line tools and a defined gateway API. A ReleaseSmall build can be reproduced locally with zig build -Doptimize=ReleaseSmall, after which the binary can be measured directly with /usr/bin/time -l.

For developers working in constrained environments, NullClaw solves the persistent problem of AI infrastructure bloat. Traditional agent systems often demand hundreds of megabytes of memory and seconds of startup time. This project inverts those numbers, delivering an autonomous system that fits on minimal hardware while maintaining extensibility through its provider abstraction layer and session management.

The approach reflects Zig's strengths in systems programming applied to AI orchestration. By staying close to the metal and avoiding runtime interpreters, NullClaw creates a new category of lightweight, self-contained assistant infrastructure that can be deployed anywhere from edge devices to servers with identical binaries.

Key technical specifications include:

  • 678 KB static binary with zero runtime dependencies beyond libc
  • Approximately 1 MB RAM footprint during operation
  • Millisecond boot times on commodity hardware
  • Native support for multiple messaging channels and model providers
  • Built-in cron scheduling and multi-modal processing

The project continues to refine operational characteristics while preserving its core constraints on size and resource usage. For builders who value efficiency and autonomy over framework features, NullClaw provides a compelling foundation.

Use Cases
  • Engineers running AI agents on embedded single-board computers
  • Developers building personal assistants with strict memory limits
  • Teams integrating autonomous agents across Telegram and Discord
Similar Projects
  • Auto-GPT - provides autonomous agents in Python but requires far higher memory and dependencies than NullClaw's static binary
  • CrewAI - enables multi-agent orchestration with a substantially larger resource footprint than NullClaw's minimal Zig approach
  • LangGraph - offers sophisticated agent workflows in Python while lacking NullClaw's millisecond startup and 678 KB size

More Stories

Pake V3.11 Refines Web-to-Desktop Packaging 🔗

New flags add auto-install, managed windows and media permissions across platforms

tw93/Pake · Rust · 47.6k stars Est. 2022

Pake has shipped version 3.11.0 with targeted improvements to its Rust and Tauri-based CLI for converting any webpage into a native desktop application.

The update introduces a --install flag on macOS that automatically places the finished bundle in /Applications after a successful build, enabling immediate Spotlight access. Developers can now use the --new-window option to contain popup and OAuth flows inside managed webview windows rather than leaking them to the system browser, keeping authentication sequences inside the app.

Media-heavy sites benefit from explicit --camera and --microphone flags that add the necessary entitlements only when requested. A new --identifier parameter lets users set custom bundle identifiers, eliminating conflicts when multiple wrappers target the same domain. Icon resolution gained a dashboard-icons CDN fallback for self-hosted and authenticated sites.

On Linux, auto-generated identifiers now always begin with a letter, fixing D-Bus violations that previously caused AppImage crashes on launch. The core value remains unchanged: applications typically ship at around 5 MB, with significantly lower memory usage than Electron equivalents and one-command packaging.

These incremental changes address practical friction points for regular users packaging services such as ChatGPT, Gemini, YouTube Music and internal tools.

Pake continues to demonstrate that lightweight desktop wrappers do not require complex configuration or heavy runtimes.

Use Cases
  • Developers packaging ChatGPT as lightweight native desktop client
  • Mac users auto-installing web apps directly to Applications folder
  • Teams containing OAuth flows inside managed webview windows
Similar Projects
  • Nativefier - Electron-based alternative with much larger bundle sizes
  • Wails - Go-based web desktop tool lacking Tauri's Rust performance
  • Neutralino.js - lightweight option but without Pake's CLI simplicity

Ruff 0.15.9 Refines Python Linting Rules 🔗

Latest release adds preview checks and fixes bugs across Flake8 plugins

astral-sh/ruff · Rust · 46.8k stars Est. 2022

Ruff’s latest version, released on April 2, delivers incremental but meaningful improvements to its Rust-based linting and formatting engine. Version 0.15.9 introduces preview support for flagging annotated variable redeclarations as F811 and permits dunder-named assignments in non-strict mode under RUF067.

Several bug fixes address edge cases that users have encountered in real codebases. The flake8-errmsg plugin no longer shadows existing msg variables when autofixing EM101. flake8-simplify correctly ignores pre-initialization references in SIM113, while W391 fixes now handle consecutive empty cells in Jupyter notebooks. Updates to pyupgrade improve nested class matching in UP008 and string escape detection in UP012.
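
The EM101 rule mentioned above flags raw string literals passed directly to an exception constructor; the autofix rewrites the raise to assign the message to a variable first, which is why avoiding shadowing an existing msg variable matters. A sketch of the before-and-after pattern:

```python
# flake8-errmsg's EM101 flags a raw string literal inside an exception
# constructor; the suggested fix assigns the message to a variable first.

def divide_flagged(a: float, b: float) -> float:
    if b == 0:
        # EM101 would flag this literal:
        raise ZeroDivisionError("denominator must be non-zero")
    return a / b

def divide_fixed(a: float, b: float) -> float:
    if b == 0:
        msg = "denominator must be non-zero"  # assign first, then raise
        raise ZeroDivisionError(msg)
    return a / b
```

The rationale is that a named message reads better in tracebacks and can be reused or formatted without repeating the literal.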

These changes sit alongside Ruff’s existing strengths: 10-100x faster performance than Flake8 or Black, over 800 built-in rules, automatic fixes, and native re-implementations of popular plugins. The tool maintains drop-in compatibility with pyproject.toml configurations, offers built-in caching, and supports Python 3.14.

Major projects including Apache Airflow, Pandas, SciPy, FastAPI and Hugging Face continue to rely on Ruff to consolidate Flake8, isort, pydocstyle, pyupgrade and autoflake into a single, fast interface. For teams managing large monorepos or frequent CI runs, each release like 0.15.9 reduces friction while preserving accuracy.

Astral’s steady development keeps the tool aligned with evolving Python standards and editor integrations.

Use Cases
  • CI engineers running fast linting on massive Python monorepos
  • FastAPI developers replacing multiple tools with single Ruff command
  • Pandas contributors automatically correcting code with Ruff fixes
Similar Projects
  • Flake8 - slower linter whose plugins Ruff reimplements natively
  • Black - opinionated formatter now matched by Ruff's faster rules
  • isort - import sorter integrated into Ruff's unified interface

Protocol Buffers 34.1 Adds Bazel 9 Support 🔗

Latest release improves build compatibility and refines C++, Java and Python implementations

protocolbuffers/protobuf · C++ · 71k stars Est. 2014

Protocol Buffers version 34.1 has shipped with targeted updates that address modern build requirements and language-specific issues. The most significant change is official support for Bazel 9.x, including relocation of the protocopt flag outside the C++ directory to reflect its language-agnostic role. Bazel users can now declare the dependency cleanly in MODULE.bazel files using Bzlmod.

C++ builds receive updated CMake dependencies and a new cc_proto_library target for MessageSet types in the bridge directory. The Java implementation fixes a parsing edge case in JsonFormat by avoiding toBigIntegerExact when handling large numeric exponents. Python bindings have been aligned with the same Bazel 9 requirements. Minor release script fixes round out the changes.

More than a decade after its introduction, protobuf remains the standard for schema-driven serialization of structured data. The protoc compiler generates compact, forward- and backward-compatible binary formats from .proto definitions, enabling efficient RPC calls and persistent storage across disparate systems. Teams are advised to pin to release commits rather than main-branch HEAD to maintain build stability.
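
Much of that compactness comes from base-128 varints, in which small integers occupy a single byte on the wire. A sketch of the long-documented varint encoding (this illustrates the stable wire format, not anything new in 34.1):

```python
def encode_varint(value: int) -> bytes:
    """Encode a non-negative integer as a protobuf base-128 varint:
    7 payload bits per byte, high bit set on every byte but the last."""
    if value < 0:
        raise ValueError("varints encode non-negative integers")
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)  # continuation bit: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def decode_varint(data: bytes) -> int:
    """Decode a varint back into an integer."""
    result = shift = 0
    for byte in data:
        result |= (byte & 0x7F) << shift
        if not byte & 0x80:  # high bit clear: final byte
            return result
        shift += 7
    raise ValueError("truncated varint")
```

For example, 300 encodes to the two bytes 0xAC 0x02, the classic example from the protobuf encoding documentation.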

These incremental improvements matter as organizations modernize their CI infrastructure and scale polyglot services that exchange millions of messages daily.

Use Cases
  • Backend engineers serializing data for cross-language microservice RPC calls
  • Mobile developers transmitting compact messages between clients and servers
  • Data platform teams defining schemas for storage and analytics pipelines
Similar Projects
  • Apache Thrift - similar serialization with stronger built-in RPC framework
  • FlatBuffers - zero-copy access model optimized for gaming performance
  • Cap'n Proto - focuses on even faster serialization and RPC throughput

Quick Hits

rust Rust empowers developers to build reliable, memory-safe software at blazing speeds using its unique ownership model that catches bugs at compile time. 111.7k
ClickHouse ClickHouse delivers ultra-fast real-time analytics on massive datasets, making complex SQL queries fly for big data applications. 46.7k
gitea Gitea provides a painless self-hosted all-in-one dev platform with Git hosting, code review, CI/CD, and package registry in one lightweight service. 54.7k
bun Bun combines a lightning-fast JavaScript runtime, bundler, test runner, and package manager into one seamless tool for modern JS development. 88.8k
go-ethereum Go-ethereum lets builders run Ethereum nodes, deploy smart contracts, and build decentralized apps with the official Go implementation of the protocol. 51k

OpenWeedLocator v3 Brings AI Detection and Wireless Control 🔗

Major update adds YOLO models, web dashboards and GPS tracking to the established Raspberry Pi weed-spraying platform.

geezacoleman/OpenWeedLocator · Python · 449 stars Est. 2021 · Latest: v3.0.0

Four and a half years after its first release, OpenWeedLocator has shipped version 3.0.0, a substantial upgrade that moves the project from basic green-on-brown detection into practical in-crop weed management.

The core hardware remains deliberately accessible: a Raspberry Pi, camera, relay-controlled solenoids and 3D-printed mounts. What has changed is the software stack. The detection pipeline has been rewritten for better performance and now runs under systemd for reliable startup and automatic recovery after power interruptions — a welcome change for equipment that lives on bouncy tractors.

The headline features are the new Web Dashboard Controllers. Two modes are provided. The Standalone Controller runs directly on the Pi and gives a single unit a touch-optimised interface designed for gloved hands. The Networked Controller turns a laptop or tablet in the tractor cab into a central hub that can configure and monitor multiple OWL units simultaneously. Both dashboards expose live camera feeds, threshold sliders, GPS status and an on-screen numpad so operators never need a physical keyboard.

For in-crop (green-on-green) scenarios, the release adds official support for Ultralytics YOLO models. Both PyTorch .pt and the much faster NCNN .zip formats are accepted, with the latter delivering 10–25 frames per second on Raspberry Pi hardware depending on resolution. A hybrid detection mode combines YOLO crop masking with classical green detection, letting users fall back to reliable colour-based spotting when the AI is uncertain. Over-the-air model deployment means new networks can be pushed to units in the field without reflashing SD cards.
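
Classical green-on-brown detection of this kind typically rests on a vegetation index such as Excess Green (ExG = 2g - r - b on normalised RGB). The sketch below shows how a hybrid fallback might combine a model confidence score with such an index; the thresholds and function names are illustrative, not OWL's actual implementation:

```python
def excess_green(r: int, g: int, b: int) -> float:
    """Excess Green index (ExG = 2g - r - b) on chromaticity-normalised RGB.
    Green vegetation scores high; brown soil scores near zero."""
    total = r + g + b
    if total == 0:
        return 0.0
    rn, gn, bn = r / total, g / total, b / total
    return 2 * gn - rn - bn

def hybrid_detect(yolo_confidence: float,
                  pixel: tuple[int, int, int],
                  conf_threshold: float = 0.5,
                  exg_threshold: float = 0.2) -> bool:
    """Trust the YOLO detection when confident; otherwise fall back
    to classical colour-based detection on the candidate pixel."""
    if yolo_confidence >= conf_threshold:
        return True
    return excess_green(*pixel) > exg_threshold
```

In practice the colour test runs over whole image regions rather than single pixels, but the fallback logic follows the same shape.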

GPS integration and real-time weed tracking complete the picture, recording spray events with location data for mapping and later analysis. The project continues to publish its original Scientific Reports paper (Coleman et al., 2022) for researchers who need to cite the system.

Installation follows the same straightforward path that made OWL popular with the maker community. On Raspberry Pi OS Bookworm the commands remain:

git clone https://github.com/geezacoleman/OpenWeedLocator owl
bash owl/owl_setup.sh
workon owl
python owl.py

Documentation has been updated to cover the new AI configuration tabs, dashboard networking and hybrid mode tuning. The community forum at community.openweedlocator.org remains active for hardware troubleshooting and model sharing.

For builders working at the intersection of agriculture and embedded systems, v3.0.0 transforms OpenWeedLocator from an interesting prototype into a flexible platform that can be deployed, monitored and improved without proprietary tools or expensive hardware.

Use Cases
  • Farmers spot-spraying weeds in row crops with Raspberry Pi
  • Researchers deploying YOLO models on autonomous farm robots
  • DIY builders assembling low-cost sprayers from 3D printed parts
Similar Projects
  • open-ag/WeedVision - delivers camera detection but lacks OWL's integrated solenoid control and web dashboards
  • FarmBot - offers automated weeding within a full CNC farming system rather than a bolt-on spot sprayer
  • agxai/CropMask - focuses on segmentation models without the complete hardware assembly and GPS tracking pipeline

More Stories

RF Swift Adds Lima VM Controls for macOS 🔗

Version 2.2.1 simplifies QEMU instance management without raw limactl commands

PentHertz/RF-Swift · Go · 301 stars Est. 2024

RF Swift's v2.2.1 release introduces dedicated Lima VM lifecycle commands, giving macOS users direct control over the underlying QEMU virtual machine. The new rfswift engine lima status subcommand displays instance state, the path to ~/.lima/rfswift/lima.yaml, QMP socket location for USB passthrough, and Docker socket details. The companion rfswift engine lima reconfig command applies updated YAML templates without data loss by stopping, reconfiguring and restarting the VM; a --force flag enables destructive recreation when required.

The Go-based toolbox deploys containerized radio-frequency and security tools on Linux, Windows and macOS without modifying the host operating system. It supports x86_64, ARM64 (Raspberry Pi and Apple Silicon) and RISC-V architectures. Users select only the tools needed—software-defined radio utilities, wireless protocol analyzers and hardware security utilities—through simple YAML recipes.

Compared with dedicated RF distributions, the container approach consumes far less disk space, enables tool isolation, and supports rapid updates. Session recording is built in for audit documentation. The latest changes make RF Swift more practical for professionals who must maintain both security testing and everyday productivity environments on the same machine.

Use Cases
  • Wireless pentesters auditing protocols on primary macOS laptops
  • Hardware researchers testing SDR tools on Raspberry Pi devices
  • Telecom engineers running RF analysis on Windows workstations
Similar Projects
  • Kali Linux - requires dedicated OS install unlike container isolation
  • SDR Docker images - basic containers lacking unified lifecycle commands
  • GNU Radio - core framework without multi-platform deployment automation

NWinfo v1.6.1 Adds Mainboard and Chipset Detection 🔗

Latest release brings DDR5 parsing, GPU rebar flags and MCFG support to Windows hardware utility

a1ive/nwinfo · C · 516 stars Est. 2021

The latest release of nwinfo strengthens its position as a direct-access hardware diagnostic tool for Windows. Version 1.6.1 introduces GUI mainboard information, chipset detection, and Tiger Lake-H MCHBAR support, allowing builders to inspect previously opaque platform details without WMI.

New capabilities include reporting DDR DRAM manufacturer, revision and stepping data, along with corrected DDR5 SKU parsing. The update adds GPU integrated graphics and rebar detection flags, expands PM table sizes for additional SMU versions, and includes an MCFG table parser. PCI attribute and subsystem parsing have been enhanced, while PNP IDs have been refreshed.

The utility continues to extract SMBIOS, CPUID, S.M.A.R.T., SPD, and EDID data through low-level mechanisms. Users can export results in JSON, YAML or HTML formats. The GUI, built with Nuklear, now displays mainboard details alongside human-readable sensor output. Network adapter reporting has been refactored to show transmit/receive link speeds and MTU values.

A warning accompanies the release: PawnIO can trigger BSOD on Windows 10 versions earlier than 2004. The project remains under the Unlicense and incorporates libcpuid, CrystalDiskInfo and other established components.

These changes keep the four-year-old utility current with modern platforms and memory technologies that many developers now deploy.

Use Cases
  • Hardware developers querying chipset and DDR5 details on Windows
  • System administrators exporting PCI and SMBIOS data as JSON
  • PC builders analyzing GPU rebar and mainboard information via GUI
Similar Projects
  • HWiNFO - commercial tool with broader real-time monitoring
  • CPU-Z - narrower focus limited mainly to processor data
  • OpenHardwareMonitor - sensor-oriented but lacks export options

Rezolus Fixes eBPF Stack Overflows on Newer Kernels 🔗

Version 5.8.2 adopts safer ring buffer methods to maintain kernel compatibility

iopsystems/rezolus · Rust · 253 stars Est. 2023

Rezolus version 5.8.2 addresses compatibility challenges with recent Linux kernels. The release updates all eBPF programs to use bpf_ringbuf_reserve and bpf_ringbuf_submit instead of bpf_ringbuf_output with stack allocations. This prevents stack overflows that occurred when exceeding the 512-byte limit on newer kernel versions.

The fix maintains Rezolus's ability to provide high-resolution, low-overhead systems telemetry. Using eBPF instrumentation, it captures metrics across CPU utilization, scheduler responsiveness, block I/O workloads, network dynamics, system call patterns and container performance.

Multiple operating modes extend its utility. The agent collects metrics in real time. An exporter makes them available via Prometheus-compatible endpoints. The recorder supports direct file output for specific analyses, while Hindsight maintains a rolling buffer for post-event examination.
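
The exporter's endpoint speaks the standard Prometheus text exposition format, so any scraper or script can consume it. As a rough illustration, a minimal parser for that format (the metric names in the test data are invented for the example, not Rezolus's actual metric set):

```python
import re

# metric_name, optional {label="value"} set, then the sample value
SAMPLE_RE = re.compile(r'^([a-zA-Z_:][a-zA-Z0-9_:]*)(\{[^}]*\})?\s+(\S+)')

def parse_exposition(text: str) -> dict[str, float]:
    """Parse Prometheus text exposition format into {metric{labels}: value},
    skipping # HELP and # TYPE comment lines."""
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        m = SAMPLE_RE.match(line)
        if m:
            name, labels, value = m.groups()
            samples[name + (labels or "")] = float(value)
    return samples
```

A production consumer would normally use an existing Prometheus client library, but the format is simple enough that ad-hoc scripts like this are common.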

Users can view recorded data through a local web dashboard with the Viewer component. Additionally, the MCP server facilitates AI-driven analysis, allowing models to query artifacts for insights like anomaly detection or metric correlations.

For teams managing large-scale infrastructure, the update preserves compatibility with evolving kernel versions. It reinforces Rezolus's role in providing actionable observability data for troubleshooting and optimization.

Use Cases
  • SRE teams monitoring kernel and container metrics at scale
  • Incident responders using Hindsight for post-event snapshots
  • Engineers running AI analysis on Parquet telemetry recordings
Similar Projects
  • Pixie - focuses on Kubernetes application flows rather than broad system metrics
  • ebpf_exporter - offers custom probes but lacks Rezolus's multi-mode architecture
  • bpftrace - supports ad-hoc scripting instead of continuous high-resolution collection

Quick Hits

TuyaOpen TuyaOpen delivers a next-gen AI+IoT framework for T2/T3/T5AI/ESP32 hardware, slashing integration time for smart AI agents. 1.5k
project_aura Project Aura builds an ESP32-S3 air-quality station with LVGL touchscreen UI, MQTT, and native Home Assistant integration. 542
awesome-iot Curated hub of IoT projects, libraries, and hardware resources that accelerates connected device exploration and building. 3.9k
espectre ESPectre turns Wi-Fi CSI spectrum analysis into sensor-free motion detection with direct Home Assistant integration. 7k
firmware Predatory ESP32 firmware equips devices with aggressive C++ capabilities for advanced wireless IoT control and exploitation. 5.3k

egui 0.34.1 Improves WebGL Support in Rust GUI Toolkit 🔗

Latest eframe release adds browser fallback rendering and tighter cursor control, reinforcing the immediate mode library's cross-platform reliability for developers.

emilk/egui · Rust · 28.6k stars Est. 2019 · Latest: 0.34.1

egui maintains its position as a practical choice for Rust developers who need to build user interfaces without the overhead of traditional GUI frameworks. The just-released version 0.34.1 focuses on web stability. The wgpu backend now enables a WebGL fallback, allowing the library to run in browsers that lack full WebGPU support. A second change restricts cursor styling to the <canvas> element alone, eliminating unintended effects on surrounding page elements.

The library's immediate mode design remains its defining feature. Instead of creating widgets once and updating them later, developers describe the complete interface on every frame. This approach eliminates complex state synchronization and reduces boilerplate. A typical snippet demonstrates the pattern:

ui.heading("My egui Application");
ui.horizontal(|ui| {
    ui.label("Your name: ");
    ui.text_edit_singleline(&mut name);
});
ui.add(egui::Slider::new(&mut age, 0..=120).text("age"));
if ui.button("Increment").clicked() {
    age += 1;
}

eframe, the official framework, handles the heavy lifting for deployment. It supports WebAssembly, Linux, macOS, Windows, and Android from the same codebase. Because egui only requires the ability to draw textured triangles, it integrates cleanly into custom game engines and rendering pipelines.

The library's emphasis on simplicity has made it a reliable tool for rapid prototyping. Game developers use it for in-engine debug menus and tools. Data visualization teams, including those at sponsor Rerun, rely on it for multimodal stream inspection. Web developers writing Rust benefit from a straightforward path to browser applications without learning JavaScript frameworks.

Immediate mode does have trade-offs. Rebuilding the UI every frame can increase CPU usage in extremely complex interfaces, though most practical applications remain fast. The project continues to prioritize ease of use and portability over feature bloat.

Version 0.34.1's changes may appear modest, yet they address real deployment friction. WebGL fallback expands the range of supported browsers, while the cursor fix improves integration with existing web pages. For teams shipping Rust tools to varied environments, these incremental improvements matter.

The demo at egui.rs lets developers test these capabilities directly in any modern browser. Source links within the demo provide concrete examples of integration patterns. Documentation and the examples directory offer quick starting points for both new interfaces and engine embeddings.

As Rust adoption grows in game development and web infrastructure, egui's combination of minimal API and broad platform support gives builders a pragmatic GUI option that scales from quick prototypes to production tools.

Use Cases
  • Game developers adding debug interfaces to Rust engines
  • Engineers building cross-platform data visualization apps
  • Teams creating web tools with Rust and Wasm
Similar Projects
  • imgui-rs - Rust bindings for the original C++ immediate mode GUI that inspired egui's design and simplicity.
  • iced - Retained-mode Rust GUI that uses an Elm-style message architecture instead of immediate mode.
  • dioxus - Declarative Rust UI library focused on web-like component models with multiple renderers.

More Stories

Magpie Upscaler Adds Automatic Cursor Hiding 🔗

Version 0.12.1 improves auto-scaling reliability and eliminates several compatibility bugs

Blinue/Magpie · HLSL · 13.5k stars Est. 2021

Magpie v0.12.1 introduces automatic cursor hiding for its real-time window upscaling engine on Windows 10 and 11. Users can set a custom delay before the cursor disappears during scaled sessions, reducing visual clutter when applying shaders to games or applications.

The update refines the auto-scaling logic so pop-up dialogs no longer interrupt the process. The scaled window is now always kept on top; once fixes resolved the source-window layering conflicts, the now-redundant “Keep scaled window on top” toggle was removed.

Several stability issues were addressed. Scaling no longer terminates unexpectedly, title bars on certain applications are cropped correctly, and monochrome cursors no longer freeze the output. A toolbar screenshot menu ID conflict was also resolved.

The tool continues to ship with Anime4K, FSR, CRT shaders and other HLSL-based filters. It supports both fullscreen and windowed modes, multi-monitor configurations, and requires DirectX feature level 11. These changes focus on practical reliability for users running older software or games at high resolutions where native DPI handling remains inadequate.

The release reflects steady iteration on a mature codebase rather than flashy new features, prioritizing polish for daily use.

Use Cases
  • Gamers upscaling legacy titles on 4K monitors
  • Users sharpening older productivity apps at native resolution
  • Developers testing UI elements with custom scaling filters
Similar Projects
  • Lossless Scaling - commercial tool with similar real-time algorithms but paid licensing
  • ReShade - shader injector focused on games rather than system-wide window capture
  • Special K - broader modding framework that includes upscaling as one component

Bliss Shader Refines Dynamic Lighting for Minecraft 🔗

Release11 advances Chocapic v9 edit with voxel lighting and leak fixes

X0nk/Bliss-Shader · GLSL · 937 stars Est. 2023

Bliss Shader has released version 11, the latest stable update equivalent to its Modrinth and Curseforge builds. The GLSL project modifies Chocapic13’s v9 shader to produce varying lighting conditions that change with time, weather and location rather than remaining uniform.

Developer X0nk began by adding settings and, along the way, breaking features, and the edit gradually took on a distinct visual style of its own. The shader now incorporates a voxel floodfill colored lighting system contributed by Null, a depth-of-field overhaul by WoMspace, and light-leak corrections developed with help from Emin and Gri573. Ideas from RRe36 and Sixthsurge also shaped several effects.

The repository maintains three branches. Release versions appear when changes are substantial and stable enough for public distribution. The Stable branch receives regular tested updates, while the Unstable branch contains the newest work, often with bugs. Users download the stable version by selecting the green “Code” button and choosing “Download ZIP”; the archive installs directly without extraction. The Unstable branch is accessed via the branch switcher.
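For scripted setups, the same ZIP the green "Code" button serves can be fetched from GitHub's standard archive endpoint. A minimal sketch, assuming a branch named `main` (check the repository's branch switcher for the actual Stable and Unstable branch names):

```shell
# Build GitHub's ZIP-archive URL for a chosen branch of Bliss-Shader.
repo="X0nk/Bliss-Shader"
branch="main"   # assumption: substitute the Stable or Unstable branch name
url="https://github.com/${repo}/archive/refs/heads/${branch}.zip"
echo "$url"
# curl -LO "$url"   # the ZIP can go straight into the shaderpacks folder
```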

For players running recent Minecraft versions, Bliss offers concrete control over scene appearance through extensive in-game settings. Its continued maintenance into 2026 demonstrates how community-driven shader development sustains visual innovation long after initial release.

Use Cases
  • Survival players adjusting variable lighting across biomes and times
  • Content creators using enhanced DOF for cinematic Minecraft footage
  • Server operators deploying custom colored lighting for community worlds
Similar Projects
  • Chocapic13 - original base shader that Bliss extensively modifies
  • Complementary Shaders - high-performance alternative with different visual priorities
  • BSL Shaders - realistic lighting pack emphasizing vanilla-compatible effects

Raylib 5.5 Adds One-Click WebAssembly Builds 🔗

New Windows package enables single-click C to web compilation alongside expanded API

raysan5/raylib · C · 31.9k stars Est. 2013

Raylib 5.5 brings a notable improvement in accessibility for cross-platform development. The updated pre-configured package for Windows enables programmers to export their C projects to WebAssembly with one mouse click. This feature greatly simplifies the process of creating browser-based games and interactive applications.
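The one-click export is a convenience wrapper; under the hood, browser builds of raylib programs go through Emscripten. A sketch of the kind of invocation involved, with flags following raylib's web-build guide (file names and paths here are illustrative, and the C source is expected to drive its game loop in a web-compatible way, e.g. via emscripten_set_main_loop):

```shell
# Compile main.c against a web-compiled libraylib.a for the browser.
# USE_GLFW=3 maps windowing/input; PLATFORM_WEB selects raylib's web backend.
emcc -o game.html main.c -Os -Wall ./libraylib.a -I. -L. \
     -s USE_GLFW=3 --shell-file minshell.html -DPLATFORM_WEB
```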

Since the last major release, the project has seen 800 commits that close 270 issues. The API has grown by 30 new functions to reach a total of 580, with 110 more receiving updates and fixes. A total of 140 new contributors joined, highlighting the project's expanding community.

The library's core design remains focused on simplicity. Written in plain C, it requires no external dependencies and supports a wide range of platforms from desktop to embedded devices and the web. Its OpenGL abstraction layer allows for consistent rendering across different versions and even provides a software fallback.

Full 3D capabilities include model loading with skeletal animation, PBR materials and post-processing effects. The raymath module offers comprehensive vector, matrix and quaternion operations while the audio system supports streaming for popular formats.

These changes make raylib particularly suitable for rapid prototyping and for teaching game programming across platforms.

Use Cases
  • Novice developers prototyping videogames for web browsers in plain C
  • Educators teaching videogame development to students on multiple platforms
  • Hardware engineers creating visuals for Raspberry Pi and embedded systems
Similar Projects
  • SDL - lower-level C library without integrated 3D model support
  • GLFW - windowing library requiring additional components for full games
  • Allegro - similar game library but with less emphasis on web deployment

Quick Hits

GDevelop Build cross-platform 2D, 3D, and multiplayer games with this open-source engine designed for creators of all skill levels. 21.8k
Luma-Framework Mod DX11 games with this ReShade-based framework that overhauls post-processing for HDR, DLSS, and advanced rendering in Prey and beyond. 290
gozen Cut and edit videos with this minimalistic Godot-powered editor built for speed and simplicity. 369
comedot Accelerate 2D game development with this component-based Godot framework and template for modular, reusable code. 451
Shrimple Enhance Minecraft Java with subtle shadows and colored lighting while preserving vanilla aesthetics using this lightweight shader. 174