Tuesday, March 31, 2026

The Git Times

“The medium is the message.” — Marshall McLuhan

AI Models
Claude Sonnet 4.6 $15/M · GPT-5.4 $15/M · Gemini 3.1 Pro $12/M · Grok 4.20 $6/M · DeepSeek V3.2 $0.89/M · Llama 4 Maverick $0.60/M
Full Markets →

Codex Plugin Brings OpenAI Reviews to Claude Code 🔗

Enables seamless use of Codex commands in existing Claude Code sessions for code analysis

openai/codex-plugin-cc · JavaScript · 3.5k stars 0d old · Latest: v1.0.1

OpenAI has released a plugin that integrates its Codex code analysis tool directly into Claude Code. The openai/codex-plugin-cc project lets developers run code reviews and delegate tasks using OpenAI's models without switching from their existing Claude Code workflow.

Installation uses Claude Code's built-in marketplace. Users add the repository with /plugin marketplace add openai/codex-plugin-cc, install via /plugin install codex@openai-codex, then run /codex:setup. The command checks for the Codex CLI, offers to install it with npm install -g @openai/codex if needed, and handles login.

Once active, the plugin adds several slash commands. /codex:review performs standard read-only reviews of uncommitted changes or branches specified with --base main. /codex:adversarial-review provides steerable challenge reviews. Background job commands—/codex:rescue, /codex:status, /codex:result, and /codex:cancel—manage longer-running tasks. A codex:codex-rescue subagent appears in the agents list.

Multi-file reviews can take considerable time, so the plugin supports --background and --wait flags. It requires Node.js 18.18 or later plus either a ChatGPT subscription or OpenAI API key. Usage counts against Codex limits.

Version 1.0.1 fixes Windows .cmd shim errors and improves setup documentation.

Use Cases
  • Software developers conducting Codex code reviews in Claude Code
  • Engineers delegating background review tasks to OpenAI models
  • Teams managing status and results of long-running Codex jobs
Similar Projects
  • anthropic/claude-plugins - provides native extensions but lacks OpenAI Codex access
  • openai/codex-cli - supplies the core tool without Claude Code integration
  • github/copilot-chat - integrates AI reviews in different interfaces and models

More Stories

Auto-Dream Brings Sleep Cycles to OpenClaw Agents 🔗

Neuroscience-based system consolidates memories across five layers for persistent AI

LeoYeAI/openclaw-auto-dream · HTML · 498 stars 3d old

Auto-Dream adds automatic memory consolidation to OpenClaw agents, giving them the equivalent of sleep. The project implements five distinct memory layers, importance scoring, forgetting curves, knowledge graphs and health dashboards. Rather than treating memory as file storage, it applies cognitive principles to decide what to retain, what to weaken and how to connect related information.

Agents process accumulated experiences during scheduled dream cycles. Important memories are reinforced while less relevant details follow natural decay. The system maintains a persistent knowledge graph that links decisions, conversations and observations across time.
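
The reinforce-and-decay behavior described above can be sketched as an importance-weighted forgetting curve. The function names, decay constants, and threshold below are illustrative assumptions, not Auto-Dream's actual implementation:

```python
import math

def retention(importance: float, days_since_access: float,
              half_life_days: float = 7.0) -> float:
    """Hypothetical forgetting curve: retention decays exponentially
    with time, more slowly for memories scored as important."""
    # Higher importance stretches the effective half-life.
    effective_half_life = half_life_days * (1.0 + importance)
    return math.exp(-math.log(2) * days_since_access / effective_half_life)

def dream_cycle(memories, keep_threshold=0.5):
    """Hypothetical consolidation pass: reinforce memories that are
    still retained, let the rest decay below the threshold and drop."""
    kept = []
    for m in memories:
        r = retention(m["importance"], m["age_days"])
        if r >= keep_threshold:
            m["importance"] += 0.1 * r  # reinforcement on recall
            kept.append(m)
    return kept

memories = [
    {"id": "design-decision", "importance": 0.9, "age_days": 10},
    {"id": "small-talk", "importance": 0.1, "age_days": 10},
]
survivors = dream_cycle(memories)
```

After one cycle, the high-importance design decision survives and is reinforced, while the low-importance chatter decays out, which is the basic shape of the behavior the project describes.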

Version 4.0.0 introduced several functional improvements. When no new content exists, the Skip-with-Recall mechanism surfaces an old memory with streak count instead of an empty notification. Growth metrics now show exact entry increases such as "142 → 145 entries (+2.1%)". Stale thread detection scans for items untouched more than 14 days and highlights the top three. Weekly summaries run every Sunday, and the dashboard.html regenerates automatically after each cycle.

The tool matters because MyClaw.ai runs OpenClaw agents on dedicated Linux servers with persistent files, cron jobs and unrestricted internet access. Without consolidation, these always-on agents lose context despite having long-term storage. Auto-Dream addresses that gap directly.


Use Cases
  • Developers running 24/7 OpenClaw agents consolidate nightly memories
  • Professionals tracking decisions across weeks with persistent AI assistants
  • Researchers building evolving knowledge bases through simulated dream cycles
Similar Projects
  • MemGPT - provides layered memory management but lacks forgetting curves
  • LangChain - offers basic conversation buffers without automated consolidation
  • NeuralMemory - builds knowledge graphs but without dream-cycle processing

Context7 Patch Refines AI Agent Setup Process 🔗

Version 0.3.9 adds agent re-selection and fixes TOML config handling

upstash/context7 · TypeScript · 51.2k stars Est. 2025

Context7 has shipped ctx7@0.3.9, a modest but practical patch that improves daily usability for developers embedding current library documentation into LLM workflows.

The update lets users re-select already configured agents during npx ctx7 setup and overwrites existing MCP configuration entries instead of silently skipping them. It also corrects TOML replacement logic so sub-sections are handled accurately and whitespace drift no longer appears on repeated runs.

These changes matter because AI coding tools are now standard in many workflows. Context7 solves a persistent problem: LLMs working from outdated training data. It fetches version-specific documentation and code examples directly from source libraries and injects them into prompts, either through the use context7 directive or via native MCP tool calls.

The platform operates in two modes. CLI + Skills installs a guidance layer that directs agents to use ctx7 commands. MCP mode registers the server at https://mcp.context7.com/mcp, passing the CONTEXT7_API_KEY header for authenticated access. An optional free API key from context7.com increases rate limits.
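
For MCP mode, a server entry of roughly the following shape registers the endpoint and header named above; the surrounding schema (`mcpServers`, `type`, `headers`) is an assumption about Claude Code's MCP config format, and the key value is a placeholder:

```python
import json

# Illustrative sketch of an HTTP MCP server entry for Context7.
# The config schema here is assumed, not taken from Context7's docs.
config = {
    "mcpServers": {
        "context7": {
            "type": "http",
            "url": "https://mcp.context7.com/mcp",
            # Placeholder: supply a real key from context7.com
            "headers": {"CONTEXT7_API_KEY": "YOUR_API_KEY"},
        }
    }
}

print(json.dumps(config, indent=2))
```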

With the latest refinements, repeated configuration becomes reliable, reducing friction for teams that frequently adjust their Cursor, Claude or OpenCode setups. The result is fewer hallucinated APIs and more accurate generated code when working with frameworks such as Next.js, Supabase or Cloudflare.


Use Cases
  • AI agents building Next.js middleware with current JWT patterns
  • Developers configuring Cloudflare Workers for timed JSON caching
  • Programmers implementing latest Supabase email authentication flows
Similar Projects
  • continue - supplies local codebase context rather than live library docs
  • aider - focuses on git-aware file editing without dynamic doc injection
  • mcp-registry tools - provide protocol servers but lack version-specific documentation

Hindsight Delivers Learning Memory for AI Agents 🔗

Version 0.4.21 adds LiteLLM support and Strands Agents integration

vectorize-io/hindsight · Python · 6.7k stars 5mo old

Hindsight has released version 0.4.21, bringing practical improvements to its agent memory system that enables learning over time rather than simple conversation recall. The update removes hardcoded default models, adds a LiteLLM provider for Amazon Bedrock and more than 100 other LLMs, and introduces integration with the Strands Agents SDK.

The system continues to outperform alternatives on the LongMemEval benchmark, with results independently verified by researchers at Virginia Tech's Sanghani Center and The Washington Post. It addresses limitations of RAG and knowledge graph approaches by focusing on memory that actually improves agent behavior across extended interactions.

Developers can add Hindsight using the LLM Wrapper, which requires just two lines of code to replace an existing LLM client. Memories are then stored and retrieved automatically during calls. For precise control, the SDK or HTTP API allows explicit management of storage and recall operations.
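
Hindsight's actual wrapper API isn't reproduced here; the sketch below only illustrates the general pattern a two-line client wrapper implies, namely intercepting each call, storing the exchange, and injecting recalled context. Every class and method name is hypothetical:

```python
class MemoryWrapper:
    """Hypothetical wrap-the-client pattern: each call is recorded,
    and recent exchanges are prepended as recalled context."""

    def __init__(self, client):
        self.client = client
        self.memory = []  # stands in for a real memory store

    def complete(self, prompt: str) -> str:
        recalled = "\n".join(self.memory[-3:])  # naive recency recall
        reply = self.client.complete(f"{recalled}\n{prompt}".strip())
        self.memory.append(f"user: {prompt}")
        self.memory.append(f"assistant: {reply}")
        return reply

class EchoClient:
    """Stand-in for a real LLM client."""
    def complete(self, prompt: str) -> str:
        return f"echo({len(prompt)} chars)"

# The "two lines": wrap an existing client, then use it as before.
llm = MemoryWrapper(EchoClient())
first = llm.complete("What did we decide about caching?")
```

The appeal of the pattern is that storage and recall happen transparently inside the wrapper, so call sites do not change.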

Additional changes include programmatic control plane UI management, metadata handling fixes, async client improvements to prevent event loop deadlocks, and a security update that excludes the compromised litellm 1.82.8 package. The release also contains updated documentation and a guide for incorporating long-term memory into LangGraph and LangChain agents.

Hindsight is already running in production at several Fortune 500 enterprises and multiple AI startups.

Use Cases
  • Fortune 500 engineers implementing memory in production AI agents
  • Developers adding LiteLLM support to Bedrock-based agent systems
  • Teams integrating memory tools with LangGraph and Strands agents
Similar Projects
  • mem0 - simpler vector retrieval without Hindsight's learning focus
  • LangChain Memory - basic buffers that lack LongMemEval performance
  • Zep - embedding-based memory with narrower long-term capabilities

Drop-In File Cuts Claude Output Tokens by 63% 🔗

CLAUDE.md automatically reduces AI verbosity with no code changes required

drona23/claude-token-efficient · Unknown · 806 stars 0d old

Developers can reduce Claude's output tokens by 63 percent by simply adding a CLAUDE.md file to their project root. The drona23/claude-token-efficient repository offers this drop-in solution that requires no code changes or prompt modifications.

Claude Code automatically reads the file, which contains instructions to eliminate common sources of verbosity. The model no longer opens with pleasantries, restates questions, or concludes with standard polite phrases.

The file targets these specific issues:

  • Unnecessary introductory affirmations
  • Query restatements before providing answers
  • Polite but token-wasting closings
  • Special formatting characters and Unicode symbols
  • Unsolicited suggestions and over-engineering

Most Claude spend comes from input tokens, so this tool targets verbose output behavior rather than the largest cost driver. The extra input tokens the file adds to every request are offset once output volume is high enough.

The 63 percent figure comes from informal testing on five prompts without statistical controls. Results on non-Claude models remain untested, though the instructions are model-agnostic.
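
Whether the extra input tokens pay off is simple arithmetic. The prices and token counts below are illustrative assumptions, not measurements from the project:

```python
def net_token_savings(file_tokens: int, avg_output_tokens: int,
                      reduction: float, in_price_per_m: float,
                      out_price_per_m: float) -> float:
    """Per-request dollar savings: cheaper output minus the cost of
    re-sending the instruction file as input on every request."""
    saved = reduction * avg_output_tokens * out_price_per_m / 1_000_000
    added = file_tokens * in_price_per_m / 1_000_000
    return saved - added

# Illustrative numbers: a 400-token file, 1,000-token average replies,
# the reported 63% reduction, hypothetical $3/M input and $15/M output.
per_request = net_token_savings(400, 1000, 0.63, 3.0, 15.0)
```

With these assumed numbers the file pays for itself on every request; with very short replies or much cheaper output pricing, the balance could flip.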

For development teams, the project provides an effortless way to obtain cleaner, more concise AI responses that are easier to parse and less costly to generate.

Use Cases
  • Backend developers reducing output costs in Claude projects
  • AI teams generating concise code without prompt changes
  • Engineers obtaining parseable responses from Claude workflows
Similar Projects
  • system-prompt-hub - requires manual prompt inclusion each time
  • token-saver-ai - focuses on input reduction rather than output style
  • concise-claude - needs code integration unlike automatic file loading

Claude Skill Automates App Store Screenshots 🔗

Analyzes iOS codebases to pair benefits with simulator images automatically

A Claude Code skill extracts core benefits directly from an iOS app’s codebase and turns them into polished App Store screenshots. The project identifies three to five key features that drive downloads, evaluates existing simulator screenshots, rates their suitability, and matches each with the most relevant benefit.

Generation follows a controlled two-stage pipeline. First compose.py builds a deterministic scaffold that positions headline text, applies the device frame, and composites the original simulator image with pixel-perfect accuracy. The scaffold then passes to Nano Banana Pro via Gemini MCP for AI enhancement that preserves layout while improving visual quality.
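
The value of a deterministic scaffold stage is that layout is plain geometry rather than prompt output. The sketch below shows the kind of layout math such a stage performs; it is not taken from compose.py, and all names and defaults are hypothetical:

```python
def scaffold_layout(canvas_w: int, canvas_h: int,
                    shot_w: int, shot_h: int,
                    headline_h: int = 200, margin: int = 60):
    """Hypothetical layout pass: reserve a headline band at the top,
    then scale and center the simulator screenshot below it."""
    avail_w = canvas_w - 2 * margin
    avail_h = canvas_h - headline_h - 2 * margin
    scale = min(avail_w / shot_w, avail_h / shot_h)  # fit, keep aspect
    w, h = int(shot_w * scale), int(shot_h * scale)
    x = (canvas_w - w) // 2
    y = headline_h + margin + (avail_h - h) // 2
    return {"headline_box": (margin, margin, canvas_w - margin, headline_h),
            "screenshot_box": (x, y, x + w, y + h)}

# A 1290x2796 portrait canvas with a 1179x2556 simulator screenshot.
layout = scaffold_layout(1290, 2796, 1179, 2556)
```

Because the same inputs always produce the same boxes, the AI-enhancement pass can improve visual quality without being able to move text or frames around.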

Installation requires adding the skill through claude install-skill, installing Pillow, and placing Apple’s SF Pro Display Black font at the expected system path. The Gemini MCP server must be configured in Claude Code’s settings before use. Running /aso-appstore-screenshots inside a project directory launches an interactive session that saves progress in Claude’s memory system, allowing developers to resume across conversations.

The approach avoids the inconsistency of pure prompt-based generation by grounding output in actual code analysis and structured composition. A final preview image displays all screenshots side-by-side for quick review.


Use Cases
  • iOS developers extracting benefits for App Store visuals
  • Indie makers generating polished promotional screenshot sets
  • Teams automating ASO asset creation from codebases
Similar Projects
  • fastlane/snapshot - Captures simulator screenshots but requires manual marketing composition
  • app-store-mockup - Supplies device templates without codebase analysis or AI pairing
  • screenshot-ai-tools - Uses pure prompt generation lacking deterministic scaffolding stage

Open Source AI Agents Evolve Memory, Skills and Orchestration Layers 🔗

Community projects are creating modular infrastructure that gives autonomous agents persistent context, collaborative intelligence, and specialized capabilities across diverse environments.

An emerging pattern in open source reveals that the focus has shifted from running large language models to building the full operational stack required for reliable AI agents. Developers are treating agents as complete software entities that need memory systems, skill libraries, orchestration engines, and secure execution environments.

This cluster demonstrates a maturing agent infrastructure layer. Projects like vectorize-io/hindsight and LeoYeAI/openclaw-auto-dream tackle the core problem of memory. Rather than stateless conversations, these tools create learning memory systems and automatic consolidation cycles—essentially “sleep” for agents that compress and reinject relevant context. Similarly, thedotmack/claude-mem automatically captures, compresses, and recalls coding session history.

Orchestration and collaboration represent another major axis. rivet-dev/agent-os delivers a portable operating system for agents powered by WebAssembly and V8 isolates, offering six-millisecond cold starts at a fraction of traditional sandbox costs. bfly123/claude_code_bridge enables real-time multi-AI collaboration between Claude, Codex, and Gemini with persistent context, while ruvnet/ruflo and Yeachan-Heo/oh-my-claudecode provide sophisticated multi-agent swarm coordination specifically for Claude Code.

The pattern extends to specialized domains and tooling. TauricResearch/TradingAgents and hsliuping/TradingAgents-CN demonstrate multi-agent financial trading frameworks. karpathy/autoresearch shows agents autonomously conducting research on single-GPU systems. Browser and communication integrations appear in vercel-labs/agent-browser, Panniantong/Agent-Reach, and WecomTeam/wecom-cli, which lets both humans and agents operate enterprise platforms through the terminal.

Framework-level work such as mastra-ai/mastra (from the Gatsby team), agentscope-ai/agentscope, and langchain-ai/deepagents provides structured environments for building visible, understandable, and trustworthy agents. Local-first execution is addressed by mudler/LocalAI, enabling any modality—text, vision, voice—on consumer hardware without GPUs.

Collectively, these repositories signal that open source is constructing a composable agent stack. Just as the web ecosystem produced standardized components for routing, state management, and UI, the AI agent space is developing standardized memory, skills (with registries exceeding 5,400 entries), security boundaries, and orchestration primitives. The movement emphasizes modularity over monolithic systems, allowing developers to mix memory solutions, skill collections, and execution runtimes according to their needs.

This infrastructure focus suggests open source is moving toward a future where sophisticated autonomous agents become as accessible and customizable as today’s web applications, dramatically lowering the cost of building reliable agentic systems.

Use Cases
  • Developers orchestrating multi-agent coding workflows
  • Researchers running autonomous scientific experiments
  • Analysts deploying specialized financial trading agents
Similar Projects
  • LangGraph - Offers graph-based agent workflows but lacks the specialized memory consolidation and Claude-centric skill ecosystems
  • CrewAI - Focuses on role-based multi-agent teams yet doesn't provide the low-level OS runtime or browser automation layers
  • Auto-GPT - Pioneered autonomous goal pursuit but without the persistent learning memory systems and 5000+ skill registries now emerging

Open Source Builds Modular Tooling for LLM Agents and Efficiency 🔗

From quantization kernels to agent operating systems and multi-model bridges, new projects are creating composable infrastructure for practical LLM deployment.

Open source is entering a new phase of LLM maturity, marked by the rapid creation of specialized tools that turn raw models into efficient, agentic systems. Rather than focusing on foundational models, this wave of llm-tools emphasizes operational capabilities: optimized inference, memory management, agent orchestration, secure execution, and domain-specific integrations.

A clear technical pattern emerges across the cluster. Inference efficiency receives heavy attention. 0xSero/turboquant demonstrates near-optimal KV cache quantization using 3-bit keys and 2-bit values, implemented with Triton kernels and vLLM integration. huggingface/text-embeddings-inference delivers high-performance embedding serving in Rust, while mudler/LocalAI enables running LLMs, vision, and multimodal models on commodity hardware without GPUs. These projects show the community prioritizing memory reduction and hardware accessibility at the systems level.
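
The memory savings in such schemes come from uniform low-bit quantization. The sketch below illustrates the basic idea behind a 3-bit-keys scheme in plain Python; it is a toy per-tensor quantizer, not turboquant's Triton kernels:

```python
def quantize(values, bits):
    """Uniform quantization: map floats onto 2**bits levels between
    the observed min and max, storing only small integers plus
    a (lo, scale) pair for reconstruction."""
    lo, hi = min(values), max(values)
    levels = (1 << bits) - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = [round((v - lo) / scale) for v in values]
    return q, lo, scale

def dequantize(q, lo, scale):
    return [lo + qi * scale for qi in q]

keys = [0.12, -0.4, 0.33, 0.9, -0.05]
q, lo, scale = quantize(keys, bits=3)   # 3-bit keys: 8 levels
recovered = dequantize(q, lo, scale)
max_err = max(abs(a - b) for a, b in zip(keys, recovered))
```

Each stored value shrinks from 16 or 32 bits to 3, at the cost of a bounded reconstruction error of at most half the quantization step; real KV-cache schemes refine this with per-channel or per-block scales.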

Agent infrastructure forms another pillar. rivet-dev/agent-os offers a portable operating system for agents built on WebAssembly and V8 isolates, achieving ~6 ms coldstarts at a fraction of sandbox costs. ruvnet/ruflo provides enterprise-grade orchestration for Claude-based multi-agent swarms with distributed intelligence and RAG. mastra-ai/mastra and dust-tt/dust deliver modern TypeScript frameworks for building AI agents, while rocketride-org/rocketride-server supplies a high-performance C++ pipeline engine with 50+ extensible nodes supporting numerous model providers and vector databases.

Tooling for collaboration and extensibility is equally prominent. bfly123/claude_code_bridge enables real-time multi-AI collaboration across Claude, Codex, and Gemini with persistent context and minimal token overhead. The proliferation of skills and plugins, seen in hesreallyhim/awesome-claude-code, sickn33/antigravity-awesome-skills, and alirezarezvani/claude-skills, provides hundreds of reusable components for coding, research, and operational tasks. Memory-focused innovations like LeoYeAI/openclaw-auto-dream introduce automatic consolidation cycles reminiscent of biological sleep.

Domain applications further illustrate the trend. TauricResearch/TradingAgents and ZhuLinsen/daily_stock_analysis deploy multi-agent LLM systems for financial trading and real-time market intelligence. upstash/context7 generates up-to-date code documentation specifically for LLMs and AI editors.

Collectively, these projects signal that open source is heading toward a composable, interoperable AI stack. The emphasis on low-overhead bridges, secure containerized agents (nanoclaw), small-model training (jingyaogong/minimind), and cross-provider compatibility (BerriAI/litellm) suggests developers will soon assemble sophisticated LLM systems with the same modularity they expect from modern web or cloud infrastructure. This tooling layer is making agentic AI practical, portable, and domain-adaptable.

Use Cases
  • Engineers optimizing LLM inference on consumer hardware
  • Developers orchestrating secure multi-agent collaboration workflows
  • Analysts building LLM-powered financial trading systems
Similar Projects
  • LangChain - Provides higher-level agent abstractions while this cluster focuses on low-level inference kernels and OS primitives
  • AutoGen - Emphasizes conversational multi-agent patterns but lacks the specialized Claude skills and quantization focus seen here
  • Ollama - Centers on local model serving similar to LocalAI but offers less agent orchestration and domain plugin tooling

Open Source Builds Agent-Native Tooling for AI-Driven Development 🔗

CLI bridges, browser APIs, and local execution engines are giving autonomous agents first-class access to terminals, codebases, and the internet

An emerging pattern in open source dev-tools is the rapid creation of infrastructure that treats AI agents as primary users rather than auxiliary assistants. Rather than simply wrapping existing LLMs with chat interfaces, these projects focus on giving agents persistent context, direct system access, and standardized control surfaces across terminals, browsers, and IDEs.

The technical emphasis is unmistakable. Multiple repositories demonstrate CLI-first design that minimizes token overhead while maintaining stateful sessions. bfly123/claude_code_bridge enables real-time collaboration between Claude, Gemini, and other models with shared context, while Panniantong/Agent-Reach provides zero-cost internet access by turning social platforms and code repositories into queryable surfaces through a single terminal command.

Browser control has become a critical capability. Both vercel-labs/agent-browser and epiral/bb-browser expose Chrome instances as programmable APIs that preserve login state and cookies, effectively giving agents authenticated web access. This is complemented by ChromeDevTools/chrome-devtools-mcp, which surfaces debugging and inspection primitives directly to coding agents.

Local execution and model independence form another pillar of the trend. mudler/LocalAI allows any modality—text, vision, audio—to run on commodity hardware without GPUs, while rocketride-org/rocketride-server delivers a high-performance C++ pipeline with over 50 extensible nodes and support for 13 model providers. These projects prioritize developer control and deployment flexibility through Docker, SDKs, and IDE extensions.

Agent extensibility is being addressed through skill systems. alirezarezvani/claude-skills offers nearly 200 specialized plugins, kepano/obsidian-skills teaches agents to manipulate Markdown, JSON Canvas, and CLI tools within Obsidian, and teng-lin/notebooklm-py unlocks programmatic access to NotebookLM capabilities. Meanwhile, coder/mux creates isolated, parallel environments specifically for agentic development, preventing cross-contamination between AI workflows.

Collectively, these projects signal that open source is moving beyond human-centric tooling toward agent-native infrastructure. The terminal is becoming an agent command center, browsers are turning into execution backends, and development environments are being re-architected around persistent, autonomous actors. This cluster reveals a clear technical direction: standardized interfaces for agent observation, action, and memory that will likely define the next decade of software development.

The pattern suggests open source is preparing for a future where significant portions of coding, research, and system operations are performed by autonomous agents operating with the same fluency humans expect from their own tools.

Use Cases
  • Developers controlling AI agents through terminal commands with persistent context
  • Researchers giving agents browser access for real-time web data collection
  • Teams building isolated parallel environments for multi-agent code development
Similar Projects
  • Aider - CLI-based AI pair programmer that similarly bridges LLMs with git workflows but focuses on conversational editing
  • OpenDevin - Open platform for AI software engineers that provides comparable agent sandboxing and tool use patterns
  • Continue.dev - IDE-native AI coding assistant that extends agents within editors rather than through standalone CLIs

Quick Hits

cherry-studio AI productivity studio with smart chat, autonomous agents, and 300+ assistants plus unified access to frontier LLMs. 42.6k
Polymarket-copy-trading-bot Polymarket copy-trading bot that automatically mirrors top traders' positions across prediction markets. 482
thereisnospoon First-principles ML primer that teaches engineers to reason about machine learning systems like regular software. 385
oh-my-claudecode Teams-first multi-agent orchestration for Claude Code. 17.9k

OpenBB ODP Unifies Financial Data for Quants and AI Agents 🔗

Latest v1.0.1 stable release reinforces "connect once, consume everywhere" architecture across Python, Workspace and MCP servers

OpenBB-finance/OpenBB · Python · 64.7k stars Est. 2020 · Latest: ODP

OpenBB's Open Data Platform has matured into the infrastructure layer that financial data engineers have been seeking. With the recent v1.0.1 stable release, the project solidifies its role as the single integration point that feeds multiple consumption surfaces without duplicated pipelines.

The platform solves a persistent problem: proprietary, licensed and public data sources must be cleaned, normalized and delivered to quants running Python models, analysts working in Excel or OpenBB Workspace, AI agents connected via MCP servers, and applications consuming REST APIs. ODP's "connect once, consume everywhere" design addresses this by acting as the central consolidation layer.

Getting started remains deliberately simple. After pip install openbb, developers can pull data with just a few lines:

from openbb import obb

# Fetch historical daily prices for AAPL, then convert the
# result object to a pandas DataFrame for analysis.
output = obb.equity.price.historical("AAPL")
df = output.to_dataframe()

The same backend can serve an entire organization. Installing openbb[all] and running openbb-api launches a FastAPI server via Uvicorn at 127.0.0.1:6900. Analysts then connect this backend through the Workspace "Apps" tab by adding the local URL, instantly exposing the same datasets to visual dashboards and AI copilots.

This release marks the stabilization of the Open Data Platform as distinct from the original OpenBB Terminal vision. The architecture now explicitly prioritizes data engineers building integrations that simultaneously support Python environments for quantitative research, enterprise UI for analysts, and MCP servers that AI agents use to query live market data.

The breadth of coverage matches the needs of modern financial workflows. The platform handles equities, options, derivatives, fixed-income, crypto, economics, and quantitative models. Machine learning practitioners can access clean datasets without building their own scrapers or managing multiple vendor APIs.

For teams deploying AI agents in finance, the value is immediate. Rather than giving agents direct access to disparate data sources with inconsistent schemas, developers point them at ODP's unified interface. The same holds for research dashboards that need both real-time and historical data without managing separate connections.

The v1.0.1 release, which auto-updaters now reference via latest.json, indicates the project has moved beyond experimentation into production infrastructure. Data engineers integrating licensed feeds or internal databases can now expose them consistently across every tool their organization uses.


Use Cases
  • Quants pulling equity and options data in Python scripts
  • AI agents querying unified market data via MCP servers
  • Analysts connecting datasets to OpenBB Workspace dashboards
Similar Projects
  • yfinance - delivers convenient Yahoo Finance access but lacks OpenBB's multi-surface architecture for AI agents and enterprise tools
  • ccxt - specializes in crypto exchange data without covering equities, fixed income or the unified Python-to-Workspace integration
  • pandas-datareader - provides basic data import capabilities but offers none of the financial domain modeling or AI copilot infrastructure

More Stories

Spec Kit 0.4.3 Standardizes AI Skill Naming 🔗

Update unifies Kimi and Codex conventions while adding PowerShell 5.1 compatibility

github/spec-kit · Python · 83.9k stars 7mo old

Spec Kit released version 0.4.3 this week, focusing on consistency and platform reliability for its Spec-Driven Development toolkit.

The update migrates legacy dotted Kimi directories and unifies skill naming with Codex, as implemented in pull request #1971. This reduces maintenance overhead for community extensions and presets that rely on predictable directory structures when working with different AI agents.

A separate change fixes PowerShell 5.1 compatibility by replacing the null-conditional operator, detailed in pull request #1975. The adjustment ensures the specify CLI runs reliably on older Windows environments without syntax errors.

The toolkit continues to treat specifications as executable assets. Instead of serving as temporary documentation, they directly drive code generation through supported AI models. The specify init command scaffolds projects with flags such as --ai claude, while specify check validates the local environment.

These refinements arrive as more engineering teams integrate multiple AI agents into their workflows. Standardized naming reduces friction when switching between models, and broader shell support expands accessibility for enterprise developers constrained by corporate Windows images.

The release maintains the project's emphasis on predictable outcomes derived from clearly defined product scenarios rather than ad-hoc implementation.

Use Cases
  • Engineering teams scaffolding apps from executable PRDs
  • Windows developers running Specify CLI in PowerShell 5.1
  • AI engineers unifying skill configs across model providers
Similar Projects
  • aider - offers conversational code editing without formal specs
  • continue-dev - provides IDE AI tools but lacks executable specifications
  • cursor - focuses on inline AI assistance rather than spec-first scaffolding

OpenClaw Adds Approval Hooks for Tool Calls 🔗

Latest release tightens plugin security and expands model integrations

openclaw/openclaw · TypeScript · 342.4k stars 4mo old

OpenClaw v2026.3.28 introduces async requireApproval to before_tool_call hooks, allowing plugins to pause execution and request explicit user consent. Approvals can arrive through Telegram buttons, Discord interactions, an exec approval overlay, or the universal /approve command on any connected channel.
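
The pattern is straightforward to sketch. The asyncio version below is illustrative only: the names mirror, but are not taken from, OpenClaw's actual plugin API. It shows a before-tool-call hook pausing execution until an approval callback (standing in for a Telegram button or /approve command) resolves:

```python
import asyncio

# Hypothetical sketch of an approval-gated tool call. The hook awaits an
# approval coroutine and blocks execution when consent is denied.

async def before_tool_call(tool, args, require_approval):
    approved = await require_approval(f"Run {tool} with {args}?")
    if not approved:
        raise PermissionError(f"user denied {tool}")

async def run_tool(tool, args, require_approval):
    await before_tool_call(tool, args, require_approval)
    return f"{tool} executed"

async def demo():
    async def always_yes(prompt):  # stands in for a channel approval button
        return True
    async def always_no(prompt):
        return False
    ok = await run_tool("shell", ["ls"], always_yes)
    try:
        await run_tool("shell", ["rm -rf /"], always_no)
        denied = False
    except PermissionError:
        denied = True
    return ok, denied
```

In the real plugin system the approval future would resolve from whichever connected channel the user answers on.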

The update migrates xAI tooling to the Responses API and adds first-class x_search support. The bundled Grok plugin now auto-enables based on web-search configuration, with optional x_search setup offered during openclaw onboard. MiniMax gains an image-01 provider for both image generation and image-to-image editing, including aspect-ratio controls.

Two breaking changes affect long-term users. The deprecated qwen-portal-auth integration has been removed; administrators must migrate to Model Studio API keys. Automatic config migrations for installs older than two months have also been dropped, so legacy keys now fail validation outright.

These changes reflect a maturing focus on controlled execution and provider hygiene while preserving the project's core offer: a single-user AI assistant that operates across WhatsApp, Slack, Discord, Signal, Matrix, Teams and 15 other platforms. It runs on Node 24 (or 22.16+), supports voice on macOS/iOS/Android, and renders live canvases under user control.

The release continues OpenClaw's emphasis on keeping the assistant local, fast, and always-on without relying on third-party orchestration layers.

Use Cases
  • Security teams requiring approval before AI tool execution on Slack
  • Designers generating images with MiniMax through Telegram or Discord
  • Engineers configuring xAI search during onboarding on Linux systems
Similar Projects
  • Open Interpreter - offers local code execution but lacks native multi-channel presence
  • LibreChat - provides multi-model chat UI without deep plugin approval controls
  • AnythingLLM - focuses on document RAG with less emphasis on messaging platform integration

Gemini CLI v0.35.3 Improves Terminal Agent Stability 🔗

Patch release refines reliability for Gemini 3 models and MCP extensions

google-gemini/gemini-cli · TypeScript · 99.6k stars 11mo old

Google has shipped version 0.35.3 of gemini-cli, applying targeted fixes that resolve cherry-pick conflicts from the prior release. The update forms part of the project's established weekly cadence, which delivers preview builds every Tuesday and stable versions shortly after.

The tool provides direct terminal access to Gemini 3 models that support a 1M token context window and enhanced reasoning. This capacity allows developers to work with entire repositories or lengthy technical specifications in one session.

Built-in capabilities include:

  • Google Search grounding for factual accuracy
  • Native file operations
  • Shell command execution
  • Web content retrieval

MCP (Model Context Protocol) support enables custom mcp-client and mcp-server implementations, letting teams connect the agent to internal services or proprietary data sources. The free tier supplies 60 requests per minute and 1,000 daily requests for personal Google accounts.
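
Scripts that batch calls through the CLI on the free tier need to respect that 60-per-minute ceiling themselves. A minimal client-side throttle might look like the following; this helper is a hypothetical sketch, not part of gemini-cli:

```python
from collections import deque
import time

class MinuteThrottle:
    """Sliding-window limiter for an API allowing `limit` requests per 60 s.

    Illustrative only: quota is enforced server-side; this just shows how a
    wrapper script could stay under the free tier's 60 requests per minute.
    """
    def __init__(self, limit=60, window=60.0, clock=time.monotonic):
        self.limit = limit
        self.window = window
        self.clock = clock
        self.calls = deque()  # timestamps of recent requests

    def try_acquire(self):
        now = self.clock()
        # Drop timestamps that have fallen out of the sliding window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True
        return False
```

A caller would invoke `try_acquire()` before each request and sleep or queue work when it returns False.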

Installation remains flexible across environments, from npx @google/gemini-cli for instant runs to global npm, Homebrew, MacPorts, or conda-based setups. Under Apache 2.0 licensing, the project continues to evolve as a lightweight yet extensible bridge between command-line workflows and frontier models.

Use Cases
  • Software engineers debugging code with shell command execution
  • Developers analyzing large repositories using 1M token context
  • DevOps teams automating file operations via natural language
Similar Projects
  • aider - integrates AI directly into git-based code editing
  • shell-gpt - routes multiple LLMs through simple terminal commands
  • claude-cli - supplies Anthropic models with comparable tool use

Quick Hits

ColossalAI ColossalAI makes training and deploying massive AI models cheaper, faster and far more accessible for builders. 41.4k
AutoGPT AutoGPT lets builders create autonomous AI agents that pursue complex goals independently using tools and reasoning. 183k
dify Dify provides a production-ready platform for building, deploying and scaling agentic AI workflows. 135.1k
community Kubernetes community repo delivers resources, guides and contribution paths to master and extend container orchestration. 12.8k
notebook Jupyter Notebook enables interactive coding, visualizations and narrative in live, shareable computational documents. 13k

CARLA 0.9.16 Adds NVIDIA AI and UE5.5 Support 🔗

Latest release integrates Cosmos Transfer1 and neural reconstruction for autonomous driving research

carla-simulator/carla · C++ · 13.8k stars Est. 2017 · Latest: 0.9.16

CARLA 0.9.16 introduces native integration with NVIDIA Cosmos Transfer1 and the Neural Reconstruction Engine (NuRec), giving researchers new capabilities to bridge simulated and real-world autonomous driving data. The update also adds SimReady OpenUSD and MDL converters, enabling import and export of production-grade assets and materials directly into simulation scenes.

Support for left-handed traffic maps expands the platform’s utility beyond right-hand markets, while the ue5-dev branch now targets Unreal Engine 5.5. This requires Ubuntu 22.04 or Windows 11, an Intel i9 or AMD Ryzen 9 CPU, 32 GB RAM and an NVIDIA RTX 3070 or better with at least 16 GB VRAM.

Engineering improvements include corrected waypoint navigation that previously created infinite loops, reliable loading of navigation data when switching maps, and recorder support for vehicle doors. Container users gain GUI forwarding and the ability to mount host Unreal Engine installations. The changelog also standardizes scripts on python3 and replaces legacy wget calls with curl.

These changes sharpen CARLA’s role in training deep reinforcement learning and imitation-learning models under consistent, high-fidelity conditions. The existing Python API, ROS bridge and Autonomous Driving Leaderboard continue to operate on top of the upgraded simulation core.

Use Cases
  • Training reinforcement learning policies for urban driving
  • Validating sensor fusion pipelines in varied weather
  • Testing traffic scenario compliance with ROS stacks
Similar Projects
  • SVL Simulator - shares Unreal Engine base but lacks native NuRec integration
  • AirSim - focuses on aerial robotics with lighter ground vehicle support
  • Webots - offers cross-platform robotics simulation without CARLA’s driving-specific asset library

More Stories

NiceGUI 3.9.0 Boosts 3D Interaction and Native Support 🔗

Update introduces parallax element, camera controls and window events while fixing security and stability issues

zauberzeug/nicegui · Python · 15.6k stars Est. 2021

NiceGUI has introduced version 3.9.0, featuring new interface elements and improved integration capabilities for Python developers.

The release adds ui.parallax, based on Quasar, allowing developers to create dynamic scrolling backgrounds and layered content. This enhances the visual appeal of web-based applications without requiring frontend frameworks.

For three-dimensional visualizations, ui.scene now supports "trackball" and "map" camera controls. These additions facilitate more natural interaction with complex 3D models, beneficial for simulation and robotics applications.

Native mode gains support for window events including shown, resized and file drop through app.native. Developers can now respond directly to these system-level actions in their Python code.

Security receives attention with measures to prevent memory exhaustion from media streaming routes. This patch mitigates risks in applications handling user-uploaded or streamed content.

Multiple bug fixes address session storage conflicts with FastAPI, log scrolling issues in Firefox, table header animations and page navigation problems. Compatibility with PyInstaller has also been improved.

These updates come as Python continues to gain traction for full-stack development. NiceGUI's approach of running a webserver that the browser accesses, or operating in native desktop windows, provides flexibility for different deployment scenarios. The implicit reload on code changes speeds up the iteration process significantly during development.

Use Cases
  • Robotics developers creating real-time web-based control interfaces with Python
  • Data scientists building interactive dashboards for machine learning experiments
  • Smart home developers creating custom automation management interfaces using Python
Similar Projects
  • Streamlit - simpler data app framework with less focus on 3D scenes
  • Gradio - ML model demo tool compared to NiceGUI's general UI elements
  • Dash - more complex enterprise dashboards versus NiceGUI's Pythonic simplicity

Openpilot v0.11 Advances Simulator-Trained Driving Model 🔗

Release delivers improved longitudinal control, major power savings and two new vehicle platforms

commaai/openpilot · Python · 60.5k stars Est. 2016

openpilot v0.11.0 introduces a driving model fully trained inside a learned simulator, a shift that has produced measurable gains in longitudinal performance within Experimental mode.

The new model replaces earlier versions trained primarily on real-world data. According to the release notes, the simulator-trained approach has tightened acceleration and braking response, particularly in stop-and-go traffic. Standby power consumption on the comma four has also been cut by 77 percent to 52 mW, extending the time the device can remain connected without draining the vehicle battery.
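
For context, those power figures imply the earlier standby draw; the arithmetic below is a quick check, not a number stated in the release notes:

```python
# If 52 mW is what remains after a 77 percent cut, the comma four's previous
# standby draw was 52 / (1 - 0.77), roughly 226 mW.
new_draw_mw = 52
reduction = 0.77
previous_draw_mw = new_draw_mw / (1 - reduction)  # ≈ 226.1 mW
```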

Community contributors added support for the Kia K7 2017 and Lexus LS 2018, increasing the total number of officially supported vehicles beyond 300. Installation on a comma four remains straightforward: users point the device to openpilot.comma.ai during setup and attach the appropriate car harness.

The project continues to ship four main branches. The release-mici branch delivers stable updates, while nightly-dev includes experimental features for select models. All code is written in Python and developed openly on GitHub, with comma maintaining the core stack and individual contributors adding car-specific integrations.

The release reflects openpilot’s maturing architecture: an operating system for robotics that incrementally replaces factory driver-assistance software rather than bolting new hardware on top of it.

Use Cases
  • Engineers upgrading ADAS on 300 supported car models
  • Developers training driving policies inside learned simulators
  • Comma four owners reducing device standby power consumption
Similar Projects
  • Autoware - full-stack autonomy stack versus openpilot's targeted ADAS replacement
  • Baidu Apollo - enterprise-grade self-driving platform with heavier hardware requirements
  • ROS - general robotics framework that openpilot extends for production vehicles

Quick Hits

webots Webots is a full-featured 3D robot simulator for modeling, programming, and testing complex robotic systems in realistic environments. 4.2k
kornia Kornia delivers differentiable geometric computer vision primitives for PyTorch, powering advanced spatial AI and vision research. 11.1k
ardupilot ArduPilot supplies production-grade autopilot source for planes, copters, rovers, and subs, enabling real-world autonomous vehicle control. 14.8k
cloisim CLOiSim rapidly spins up multi-robot Unity3D simulations from SDF files with seamless ROS2 integration for robotics testing. 171
ros2_documentation The ROS 2 docs repository provides comprehensive guides and references to accelerate framework mastery and robotics development. 868
rerun An open source SDK for logging, storing, querying, and visualizing multimodal and multi-rate data 10.5k

Juice Shop Release Refines Build Automation Pipeline 🔗

Version 19.2.1 automates challenge updates and fixes bundle analysis generation

juice-shop/juice-shop · TypeScript · 12.8k stars Est. 2014 · Latest: v19.2.1

The OWASP Juice Shop has shipped version 19.2.1 with incremental but meaningful changes to its release engineering. The new build process now automatically updates coding challenge snippets in the companion website repository, removing a manual step that previously consumed maintainer time. It also corrects the generation of frontend bundle analysis diagrams, improving visibility into the application's JavaScript payload size.

Written in TypeScript, the project continues to function as a fully-featured insecure web application that mirrors real-world flaws. It implements the complete OWASP Top 10 alongside additional vulnerabilities such as improper input validation, broken access control and insecure deserialization. These are delivered through a realistic online shop interface that supports both guided learning and free-form exploitation.

Setup options remain practical. Developers can git clone the repository, run npm install followed by npm start, or use the 64-bit packaged distributions that bundle native binaries for SQLite and libxmljs2. Docker images and Vagrant configurations allow consistent deployment across training environments.

These maintenance improvements arrive as web application security threats remain a primary attack vector. By smoothing the release pipeline, the project lowers the barrier for contributors to add new challenges and keeps the tool current with evolving Node.js ecosystems. The result is a more sustainable platform for ongoing security education and tooling validation.

Changes in v19.2.1

  • Automated synchronization of challenge snippets
  • Fixed frontend bundle analysis diagram creation
  • Updated release workflows for long-term maintainability
Use Cases
  • Security trainers demonstrate OWASP Top 10 flaws to developers
  • Pentesters evaluate scanning tools against realistic web vulnerabilities
  • CTF organizers host hacking competitions in controlled environments
Similar Projects
  • WebGoat - Java-based vulnerable app with structured learning paths
  • DVWA - PHP implementation focused on common web application flaws
  • Security Shepherd - OWASP-aligned training platform with progressive challenges

More Stories

Community Proxmox Scripts Add Bambuddy in New Release 🔗

Latest update includes breaking rename of BirdNET to BirdNET-Go for improved accuracy

community-scripts/ProxmoxVE · Shell · 27.4k stars Est. 2024

Proxmox VE administrators gained access to an additional deployment script with the project's most recent release. The update on March 30 added Bambuddy to the collection of supported applications and implemented a rename from BirdNET to BirdNET-Go.

This breaking change requires existing users to update any references to the old script name in their workflows. The adjustment reflects upstream developments in the bird identification software.

The community-scripts/ProxmoxVE repository offers Shell scripts for creating optimized LXC containers and virtual machines on Proxmox VE 8.4.x through 9.1.x. Supported operating systems include Debian, Ubuntu and Alpine, with ready-made scripts for Home Assistant, Docker, network tools, security solutions and smart-home services.

Installation has been streamlined through the web interface at community-scripts.org or by adding a local menu directly in the Proxmox UI. The local option uses this one-line command:

bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/pve-scripts-local.sh)"

Community members continue to contribute new scripts and maintain existing ones, providing both simple and advanced configuration modes, automatic updates, and post-install troubleshooting tools. Regular security patches and performance optimizations keep the scripts relevant for production self-hosting and homelab environments. Active Discord and GitHub channels support users with feature requests, bug reports and implementation guidance.

Use Cases
  • Homelab admins automate LXC container deployment for various services
  • Self-hosters install and configure Home Assistant on Proxmox VE
  • Network engineers set up secure Docker environments using scripts
Similar Projects
  • awesome-homelab - curates resources without providing installation scripts
  • ansible-proxmox - uses configuration management instead of one-click bash scripts
  • proxmox-kubernetes - specializes in Kubernetes deployment rather than general app scripts

Berty Refines Offline Peer-to-Peer Secure Messenger 🔗

Version 2.471.2 improves development workflows for the censorship-resistant application built on libp2p and IPFS

berty/berty · TypeScript · 9.1k stars Est. 2018

Berty has released v2.471.2, addressing GitHub workflow issues and sustaining development of its mature zero-trust messaging platform. First created in 2018, the project continues refining a system designed to operate with or without internet, cellular coverage or any trusted intermediary.

The app delivers end-to-end encrypted messaging by default while collecting minimal metadata. Users create accounts without phone numbers or email addresses. Local discovery relies on Bluetooth Low Energy and mDNS, enabling fully offline conversations. When connectivity exists, libp2p, IPFS, and CRDTs provide distributed storage and conflict-free replication across peers.
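
The conflict-free replication that CRDTs provide can be seen in the simplest CRDT of all, a grow-only set. This toy Python sketch is purely illustrative and is not Berty's implementation: peers apply writes in any order and converge by taking the union.

```python
# Toy grow-only set (G-Set): merge is commutative, associative and idempotent,
# so replicas converge regardless of delivery order, with no coordinator.

class GSet:
    def __init__(self):
        self.items = set()

    def add(self, item):      # local write, e.g. made while offline
        self.items.add(item)

    def merge(self, other):   # applied when peers sync over any transport
        self.items |= other.items

peer_a, peer_b = GSet(), GSet()
peer_a.add("msg-1")           # written on peer A, offline
peer_b.add("msg-2")           # written concurrently on peer B
peer_a.merge(peer_b)
peer_b.merge(peer_a)
assert peer_a.items == peer_b.items == {"msg-1", "msg-2"}
```

Real message logs need richer CRDTs (ordered, with deletion), but the convergence guarantee works the same way.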

The monorepo contains a React Native mobile client written in TypeScript targeting both Android and iOS, alongside Go components using gomobile. Developers can run berty mini for a lightweight CLI messenger or berty daemon to operate a full node exposing the Wesh Protocol API.

As governments increase network monitoring and impose shutdowns, Berty's serverless architecture offers concrete resilience. Messages remain encrypted and synchronized even on adversarial networks. The French nonprofit steward continues balancing usability with uncompromising decentralization, keeping the codebase fully open for audit and contribution.

The latest maintenance release, while technical, confirms the project's ongoing commitment to privacy tools that function precisely when conventional messengers cannot.

Use Cases
  • Activists exchanging data in censored regions without internet
  • Travelers sharing sensitive information over untrusted networks
  • Communities communicating in areas with no cellular coverage
Similar Projects
  • Briar - offers comparable Bluetooth-based offline messaging for high-risk users
  • Jami - provides distributed P2P communication without central servers
  • Cwtch - focuses on metadata-resistant group chat using decentralized routing

Quick Hits

maigret Maigret builds detailed personal dossiers from 3000+ sites using only a username, powering fast OSINT investigations. 19.3k
sherlock Sherlock hunts down social media accounts by username across hundreds of networks, streamlining target discovery. 75k
wazuh Wazuh delivers unified open-source XDR and SIEM protection for endpoints and cloud workloads. 15.1k
caddy Caddy serves as a fast extensible multi-platform web server with automatic HTTPS out of the box. 71.2k
opennhp OpenNHP provides a lightweight crypto toolkit to enforce Zero Trust security for infrastructure, apps, and data. 13.8k
SWE-agent SWE-agent takes a GitHub issue and tries to automatically fix it, using your LM of choice. It can also be employed for offensive cybersecurity or competitive coding challenges. [NeurIPS 2024] 18.9k

Ladybird Refines Sandboxed Rendering for Independent Web Engine 🔗

Multi-process architecture isolates tabs and untrusted operations as project advances beyond SerenityOS foundations

LadybirdBrowser/ladybird · C++ · 61.9k stars Est. 2024

Ladybird is maturing its novel browser engine at a time when the web risks monoculture. The project’s deliberate separation of concerns across multiple processes demonstrates a serious engineering approach to security and reliability.

The architecture features a main UI process coordinating several WebContent renderer processes. Each tab runs in its own sandboxed renderer, preventing a compromise in one page from affecting the rest of the system. Image decoding and network requests are handled by dedicated ImageDecoder and RequestServer processes, keeping potentially dangerous operations outside the main browser context.
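
A minimal analogy for that per-tab isolation, in illustrative Python rather than Ladybird's C++: each "tab" renders in its own OS process, so a crash in one renderer leaves the coordinating process and its siblings running.

```python
import multiprocessing as mp
import queue

def render(page, results):
    if page == "malicious.html":
        raise RuntimeError("renderer compromised")  # simulated renderer crash
    results.put((page, "rendered"))

def browse(pages):
    # fork start method (Unix) keeps this sketch self-contained.
    ctx = mp.get_context("fork")
    results = ctx.Queue()
    procs = [ctx.Process(target=render, args=(page, results)) for page in pages]
    for proc in procs:
        proc.start()
    for proc in procs:
        proc.join()  # the coordinating process survives a renderer crash
    rendered = {}
    while True:
        try:
            page, status = results.get(timeout=0.2)
        except queue.Empty:
            break
        rendered[page] = status
    return rendered
```

Running `browse(["good.html", "malicious.html"])` renders the healthy page while the crashing renderer exits alone, which is the property the dedicated WebContent, ImageDecoder and RequestServer processes provide.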

This design inherits core components from SerenityOS while steadily evolving them for standalone use. LibWeb serves as the complete web rendering engine, LibJS executes JavaScript, and LibWasm provides WebAssembly capabilities. Supporting libraries include LibCrypto and LibTLS for cryptography, LibHTTP for HTTP/1.1, LibGfx for graphics and image handling, LibUnicode for text processing, and LibMedia for audio and video. LibCore supplies the event loop and OS abstraction, while LibIPC manages communication between processes.

Platform support now spans Linux, macOS, Windows via WSL2, and various other Unix-like systems. The project remains in pre-alpha, intended for developers and contributors rather than general users. Documentation within the repository details both code structure and contribution processes.

Recent development has focused on strengthening sandbox boundaries and reducing reliance on shared SerenityOS code. This evolution matters because independent implementations help preserve the openness of web standards. When one or two engines dominate, subtle incompatibilities and policy decisions can shape the entire web.

Builders working on web technologies benefit from studying Ladybird’s clean separation of rendering, networking, and UI layers. The explicit process boundaries offer a practical case study in defensive browser architecture against malicious content.

Participation is structured through Discord discussions and clearly documented contribution guidelines. New contributors are expected to read CONTRIBUTING.md, the issue policy, and detailed reporting guidelines before submitting work.

Ladybird demonstrates that creating a genuinely independent browser remains possible with focused effort and modern systems programming. Its progress underscores the value of multiple viable web engines in maintaining a healthy, standards-compliant web ecosystem.

Use Cases
  • Security researchers analyzing per-tab sandboxing
  • Developers implementing web standards in C++
  • Platform engineers porting browsers to new OSes
Similar Projects
  • Servo - builds a parallel from-scratch engine in Rust with emphasis on parallelism
  • NetSurf - maintains a lightweight independent browser and rendering engine
  • Dillo - focuses on minimal, fast browsing with its own non-mainstream engine

More Stories

Uv Refines Self-Update Reliability for Python Workflows 🔗

Version 0.11.2 improves mirror fetching, Windows error handling, and quiet-mode feedback

astral-sh/uv · Rust · 82.4k stars Est. 2023

Astral has released uv 0.11.2, sharpening the self-update process in its Rust-based Python package and project manager. The uv self update command now fetches the manifest from the mirror first and uses uv’s own reqwest client, delivering clearer success and failure messages even when --quiet is specified.

On Windows, the update adds a dedicated PE editing error type, giving developers more precise diagnostics during binary operations. A preview feature evaluates extras and groups when determining auditable packages, strengthening supply-chain security checks. A separate fix removes redundant project configuration parsing for uv run, reducing overhead on repeated script execution.

These changes refine an already comprehensive tool that unifies dependency resolution, virtual environment management, and Python version handling. uv maintains a universal lockfile format, supports Cargo-style workspaces, and provides a pip-compatible interface that runs 10-100x faster than the original. Its global cache deduplicates packages across projects, conserving disk space on developer machines and CI runners alike.

The release continues uv’s steady evolution since its 2023 debut, focusing on operational robustness rather than flashy new features. Installation remains unchanged—via standalone curl or pip—and the binary can still update itself without external toolchains.

Key enhancements in 0.11.2

  • Mirror-first manifest fetching for uv self update
  • Dedicated Windows PE editing error reporting
  • Preview support for extras and groups in audit paths
Use Cases
  • Engineers accelerating dependency resolution in CI pipelines
  • Maintainers managing multi-package workspaces with lockfiles
  • Developers executing scripts with inline dependency metadata
Similar Projects
  • Rye - similar all-in-one approach but slower resolver
  • Poetry - comparable project management with Python-based performance
  • PDM - fast packaging tool lacking uv's universal lockfile

Meilisearch Adds Dynamic Search Rules With Pinning 🔗

Latest release enables condition-based promotion of content in results

meilisearch/meilisearch · Rust · 56.8k stars Est. 2018

Meilisearch has introduced Dynamic Search Rules in version 1.41.0. The experimental feature lets developers promote specific documents based on query conditions or time windows without changing core indexes.

Rules trigger when queries contain defined substrings or during scheduled periods. Multiple documents can be pinned in exact order at any position in the results list. The system integrates cleanly with existing capabilities: filtering, pagination, facet distribution, hybrid search, and federated search.

Developers create and manage rules through the API. A rule is defined with PATCH /dynamic-search-rules/{uid} using JSON that specifies conditions and pinning actions. Rules are removed with DELETE /dynamic-search-rules/{uid}. The approach gives teams precise control over promotional content, seasonal highlights, or editorial priorities.
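
As a rough illustration of the rule shape, a condition-plus-pinning body might be built like this. The field names here (queryContains, activeFrom, pin, position) are assumptions for illustration, not the documented schema; consult the Meilisearch 1.41 docs for the real payload.

```python
import json

# Hypothetical body for PATCH /dynamic-search-rules/{uid}: trigger on a query
# substring or a scheduled window, and pin documents at fixed positions.
rule = {
    "conditions": [
        {"queryContains": "holiday"},               # substring trigger
        {"activeFrom": "2026-12-01T00:00:00Z",      # scheduled period
         "activeUntil": "2026-12-26T00:00:00Z"},
    ],
    "actions": {
        "pin": [
            {"documentId": "gift-card-100", "position": 1},
            {"documentId": "gift-card-50", "position": 2},
        ]
    },
}
body = json.dumps(rule, indent=2)
```

The matching DELETE /dynamic-search-rules/{uid} call removes the rule without touching the index.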

The Rust engine maintains its signature performance, typically returning results in under 50 milliseconds. It continues to combine semantic and full-text search while offering typo tolerance, geosearch, synonym support, and multi-tenancy controls.

This update addresses a practical need for context-aware result shaping that previously required application-layer workarounds. For teams already running Meilisearch in production, the new rules provide a native, declarative way to influence relevance without sacrificing speed or consistency.

Use Cases
  • Ecommerce developers implement faceted product filtering and sorting
  • Movie platforms deliver hybrid search across streaming options
  • Multi-tenant SaaS teams search contacts, deals and companies
Similar Projects
  • Typesense - matches speed and typo tolerance with simpler setup
  • Elasticsearch - provides broader analytics at greater operational cost
  • Qdrant - specializes in vectors while lacking Meilisearch's full-text focus

Linux Kernel Documentation Adds AI Coding Assistants 🔗

Updated contributor guide recognizes LLMs as legitimate participants in kernel workflows

torvalds/linux · C · 225.8k stars Est. 2011

The Linux kernel project has expanded its official documentation to explicitly address AI Coding Assistants using LLMs and related tools. This change reflects the growing presence of automated assistance in what remains the most scrutinized codebase in open source.

The updated README now lists AI roles alongside new developers, security experts, and hardware vendors. It directs them to the same foundational materials: the development process, patch submission rules, coding style guidelines, and core API documentation. All are expected to follow the established code of conduct and licensing terms in the COPYING file.

Recent commits, with the latest push on March 30, 2026, show the tree remains under active maintenance. The addition of AI guidance acknowledges that large language models are already being used for initial code generation, bug triage, and documentation work, while preserving the human-led review process that has defined kernel quality for decades.

Security professionals and academic researchers retain dedicated sections on hardening, vulnerability analysis, and architectural internals. The kernel itself continues to manage hardware resources and provide the fundamental services that power everything from embedded devices to the world's largest AI training clusters.

This pragmatic update keeps the project aligned with contemporary development realities without relaxing its technical standards.

Use Cases
  • AI assistants generating initial patches for subsystem maintainers
  • Security experts using LLMs for kernel vulnerability analysis
  • Hardware vendors leveraging AI tools when writing new drivers
Similar Projects
  • FreeBSD - alternative Unix kernel with different licensing model
  • OpenBSD - security-focused kernel emphasizing proactive code auditing
  • NetBSD - portability-oriented kernel targeting diverse hardware

Quick Hits

alacritty Blazing-fast GPU-accelerated terminal emulator using OpenGL for smooth cross-platform performance. 63.2k
codex Lightweight Rust coding agent that runs in your terminal to generate and debug code on the fly. 68.5k
awesome-rust Curated catalog of Rust crates, tools, and resources to accelerate any builder's next project. 56.5k
openssl Battle-tested C library delivering TLS, SSL, and comprehensive crypto primitives for secure apps. 29.9k
ollama Effortlessly run Kimi-K2.5, DeepSeek, Qwen, Gemma and other LLMs locally with one command. 166.5k

Venus OS Serial Battery Driver Receives Major v2.0 Overhaul 🔗

Breaking configuration changes refine charge control and temperature handling for precise DIY energy storage integration

mr-manuel/venus-os_dbus-serialbattery · Python · 224 stars Est. 2023 · Latest: v2.0.20250729

dbus-serialbattery has long served as the bridge between third-party Battery Management Systems and Victron Energy's Venus OS GX platforms. With the release of v2.0.20250729, the driver maintained by mr-manuel introduces several breaking changes that demand attention from builders running custom lithium installations.

The project enables any Venus OS device—whether an official Victron GX or a Raspberry Pi—to communicate with BMS units supporting RS232, RS485, TTL UART, and Bluetooth. It collects real-time data and publishes it to the system dbus, allowing the driver to function as a native Battery Monitor. Inverters and chargers then receive accurate State of Charge (SoC), voltage, current, and temperature values needed for safe operation.

Originally released by Louisvdw in September 2020, the driver saw mr-manuel assume primary maintenance in February 2023 with the v1.0.0 milestone. He has since handled the majority of GitHub issues while steadily expanding supported hardware. The latest v2.0 series represents the most substantial architectural refinement since he took over.

The update focuses on more precise charge management based on individual cell behavior rather than pack-level averages. Key breaking changes in config.default.ini include:

  • SOC_RESET_VOLTAGE replaced by SOC_RESET_CELL_VOLTAGE
  • TEMPERATURE_SOURCE_BATTERY now accepts a list of temperature sensors
  • CELL_VOLTAGE_DIFF_KEEP_MAX_VOLTAGE_TIME_RESTART superseded by SWITCH_TO_FLOAT_CELL_VOLTAGE_DEVIATION
  • LINEAR_LIMITATION_ENABLE and related parameters replaced by the new CHARGE_MODE system
  • Multiple timing and deviation constants renamed to clarify their roles in bulk-to-float transitions
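
A hypothetical migration check built from the renames above (the key mapping follows the changelog; the helper itself is not part of the driver):

```python
# Hypothetical helper: flag deprecated config.default.ini keys after the
# v2.0 renames. The old -> new mapping follows the changelog; the function
# is illustrative, not part of dbus-serialbattery.
RENAMED_KEYS = {
    "SOC_RESET_VOLTAGE": "SOC_RESET_CELL_VOLTAGE",
    "CELL_VOLTAGE_DIFF_KEEP_MAX_VOLTAGE_TIME_RESTART":
        "SWITCH_TO_FLOAT_CELL_VOLTAGE_DEVIATION",
    "LINEAR_LIMITATION_ENABLE": "CHARGE_MODE",
}

def find_deprecated(config_keys):
    """Return (old, new) pairs for any deprecated keys still in use."""
    return [(old, new) for old, new in RENAMED_KEYS.items()
            if old in config_keys]

stale = find_deprecated({"SOC_RESET_VOLTAGE", "MAX_BATTERY_CHARGE_CURRENT"})
```

Running a check like this against an existing config before upgrading is exactly the kind of audit the release notes recommend.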

These modifications improve how the driver manages Constant Voltage Limitation (CVL) and switching between charge stages. The new CVL_CONTROLLER_MODE and CVL_RECALCULATION_EVERY settings give builders finer control over how aggressively the system limits current as the battery approaches full charge.

For developers working locally, the project depends on velib_python, which ships with Venus OS but requires explicit PYTHONPATH configuration when testing outside the target environment. Comprehensive documentation covers supported BMS models, wiring diagrams for serial connections, installation procedures, and troubleshooting steps for common UART and Bluetooth issues.
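
A minimal sketch of that setup for off-target testing, assuming a local checkout of velib_python (the path below is a placeholder, not a path the project prescribes):

```python
# Make a locally cloned velib_python importable when developing outside
# Venus OS. The path is a placeholder; point it at your own checkout.
import os
import sys

VELIB_PATH = os.environ.get("VELIB_PATH", "./velib_python")
if VELIB_PATH not in sys.path:
    sys.path.insert(0, VELIB_PATH)
# Equivalent shell form: export PYTHONPATH="$PYTHONPATH:./velib_python"
```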

The changes matter now because DIY lithium battery deployments continue growing in off-grid and hybrid solar applications. As cell counts increase and safety requirements tighten, the ability to base control decisions on per-cell data rather than aggregate values becomes essential for both performance and longevity. Builders should audit their existing configurations before upgrading to avoid unexpected behavior in charge profiles.

Support for the project is welcomed through donations, as maintaining compatibility across dozens of BMS variants requires continuous testing and updates.


Use Cases
  • Off-grid users integrating JBD or JK BMS packs
  • Solar installers connecting RS485 batteries to GX devices
  • Developers adding Bluetooth BMS support to Venus OS
Similar Projects
  • louisvdw/dbus-serialbattery - Original implementation now evolved and maintained by mr-manuel
  • canopen-bms - Alternative driver focused on CAN bus battery communication
  • modbus-battery - Targets Modbus TCP batteries rather than serial protocols

More Stories

Photobooth App Adds Rclone Sync in v8.7.0 🔗

Latest release simplifies file synchronization across operating systems and devices

photobooth-app/photobooth-app · Python · 254 stars Est. 2022

photobooth-app has updated to v8.7.0, its final feature release in the v8 series. The main addition is a new Rclone-based synchronization tool that replaces several older transfer methods.

The backend now depends on the rclone-bin-api package, which bundles current Rclone binaries for Windows, macOS, Linux x64 and ARM. This removes the need for manual Rclone installation and simplifies deployment on Raspberry Pi and desktop systems alike. Existing QR share, FTP and Nextcloud options remain for now but are expected to be phased out.
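
rclone's standard sync CLI gives a sense of what such a backend invokes under the hood. A sketch assuming a plain subprocess-style call; the app's actual integration goes through rclone-bin-api and its bundled binaries:

```python
# Illustrative sketch of driving an rclone binary; the flags shown are
# standard rclone CLI options, but the helper itself is not photobooth-app
# code.
def build_sync_cmd(binary, source, remote_dest, dry_run=False):
    """Assemble an `rclone sync` command line."""
    cmd = [binary, "sync", source, remote_dest, "--create-empty-src-dirs"]
    if dry_run:
        cmd.append("--dry-run")
    return cmd

cmd = build_sync_cmd("rclone", "./media", "nextcloud:photobooth", dry_run=True)
# Would be executed with subprocess.run(cmd, check=True)
```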

Other changes improve reliability. Services can now signal permanent crashes for illegal configurations, preventing repeated failed recovery attempts. Multicamera setups gain TCP keepalive support through pynng, fixing dropped connections during long stills sessions. The frontend renames the download portal to sharepage and includes minor visual tweaks alongside updated dependencies.

The Python application with Vue3 interface captures stills, animated GIFs, collages, boomerangs and 3D wigglegrams. It supports DSLR cameras via gPhoto2, Raspberry Pi Camera modules, webcams, and combinations of these. Live preview runs during countdowns, while WLED integration drives LED rings for visual feedback. 3D-printable enclosure designs and MIT licensing continue to support the DIY community.

Use Cases
  • Wedding planners building Raspberry Pi guest photo stations
  • Makers assembling 3D-printed multi-camera collage booths
  • Event hosts integrating WLED countdowns with DSLR capture
Similar Projects
  • pibooth/pibooth - plugin-based but lacks Vue3 frontend and Rclone sync
  • mfts/photobooth - simpler PiCamera focus without multi-camera or wigglegram support
  • openphotobooth - web-first design missing native gPhoto2 DSLR integration

HackRF Release Fixes Mixer Lock and Expands Flash 🔗

Version v2026.01.3 resolves frequency failures while adding larger SPI storage for Pro users

greatscottgadgets/hackrf · C · 7.8k stars Est. 2012

HackRF has received a focused maintenance release that improves core radio performance. Version v2026.01.3 corrects mixer frequency lock failures that previously caused intermittent instability during transmission and reception. The fix delivers more predictable behaviour across the device's operating range, particularly important for applications requiring sustained carrier stability.

The update also grants access to a larger SPI flash on the HackRF Pro. This change allows users to store more complex firmware images and capture bigger datasets directly on the hardware without external memory cards.

The platform, written in C, continues to provide both open hardware designs and supporting software for software-defined radio work. Documentation remains available on Read the Docs, with source files in the repository's docs folder. Local PDF builds on Ubuntu require latexmk and texlive-latex-extra, followed by the standard make latex and make latexpdf sequence.

Contributors are encouraged to submit documentation improvements through pull requests. Users seeking help should first consult the troubleshooting page, then open GitHub issues. The project maintains Discord channels for discussion, though response times for issues labelled as technical support average two weeks.

These incremental changes demonstrate the project's ongoing commitment to reliability for its established user base of RF engineers and researchers.

Use Cases
  • Security researchers auditing wireless protocol vulnerabilities in the field
  • Hardware developers prototyping custom RF communication systems in labs
  • Engineers performing spectrum analysis on legacy radio equipment
Similar Projects
  • rtl-sdr - lower-cost receive-only alternative without transmit capability
  • bladeRF - FPGA-based SDR offering faster processing at higher price
  • LimeSDR - wider bandwidth platform with more complex configuration

Node Feature Discovery Adds ppc64le and s390x Support 🔗

Version 0.18.3 delivers official multi-arch images and kubectl plugin fixes

kubernetes-sigs/node-feature-discovery · Go · 1k stars Est. 2016

Kubernetes node management takes another step toward hardware agnosticism with the latest Node Feature Discovery release. Version 0.18.3 introduces official support for the ppc64le and s390x architectures, complete with dedicated container images from the Kubernetes registry.

This change allows operators of IBM Power and z/Architecture systems to automatically discover and label node features. The add-on scans for CPU capabilities, such as instruction set extensions, and applies structured labels for use in scheduling decisions.

Installation uses familiar tools. The Helm chart or Kustomize overlay now pulls the multi-arch images by default when using v0.18.3.

Example labels include feature.node.kubernetes.io/cpu-cpuid.ADX and similar entries for AESNI and other features. These enable fine-grained control over where pods run.
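
A sketch of the constraint those labels enable, assuming NFD's label scheme; the matcher itself is illustrative rather than Kubernetes scheduler code:

```python
# Sketch of the scheduling constraint NFD labels enable: a workload that
# needs specific CPU features only lands on nodes whose labels advertise
# them. Label keys follow NFD's scheme; the matcher is illustrative.
PREFIX = "feature.node.kubernetes.io/"

def node_matches(node_labels, required_features):
    """True if every required feature is labelled 'true' on the node."""
    return all(node_labels.get(PREFIX + f) == "true" for f in required_features)

node = {PREFIX + "cpu-cpuid.ADX": "true", PREFIX + "cpu-cpuid.AESNI": "true"}
ok = node_matches(node, ["cpu-cpuid.ADX", "cpu-cpuid.AESNI"])
```

In a real cluster the same effect is achieved declaratively with a pod `nodeSelector` or node affinity rule referencing these label keys.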

Additionally, the release corrects the "test" subcommand in the accompanying kubectl-nfd plugin, streamlining validation of feature rules.

The update matters as organizations increasingly deploy Kubernetes across varied server architectures. Whether in on-premises data centers or hybrid cloud setups, consistent feature detection simplifies operations.

Users can define custom feature rules using configuration files, extending beyond built-in detectors for CPUID and RDT. This flexibility has made the tool popular in high-performance computing and machine learning clusters where specific hardware instructions can significantly impact performance.

Use Cases
  • Cluster operators label diverse architecture nodes for scheduling
  • ML teams match workloads to specific CPU instruction sets
  • Enterprise admins verify hardware features on IBM systems
Similar Projects
  • nvidia/gpu-feature-discovery - restricts detection to NVIDIA GPU attributes
  • intel/intel-device-plugins-for-kubernetes - targets Intel CPUs and accelerators specifically
  • kubevirt/node-labeller - optimizes feature detection for virtual machine workloads

Quick Hits

glasgow Glasgow turns your PC into the Scots Army Knife of electronics, letting you probe, program, and debug almost any chip with Python. 2.1k
librealsense Librealsense gives C++ developers full control of Intel RealSense cameras for depth sensing, 3D scanning, and advanced computer vision. 8.6k
micrOS micrOS is a tiny asynchronous OS that brings Python-powered automation and multitasking to resource-constrained DIY microcontroller projects. 133
hwloc hwloc maps hardware topology so you can optimize code for CPUs, caches, NUMA nodes, and devices with surgical precision. 684
tulipcc TulipCC is a portable Python creative computer that lets you compose music, generate graphics, and build interactive synth art anywhere. 860

bgfx Endures as Graphics Abstraction Layer for Modern Projects 🔗

Veteran library's API-agnostic approach gains new relevance with WebGPU support and platform expansion

bkaradzic/bgfx · C++ · 16.9k stars Est. 2012

bgfx continues to solve one of the most persistent problems in graphics programming: the fragmentation of rendering APIs across platforms and vendors. Fourteen years after its creation, the C++ library still lets developers write rendering code once and target everything from high-end PC GPUs to mobile devices and the web without rewriting core drawing logic.

The project's "Bring Your Own Engine/Framework" design remains its defining characteristic. Rather than imposing a full engine architecture, bgfx functions as a thin abstraction layer. Programmers manage their own asset pipelines, scene graphs and memory systems while calling into a consistent API for buffer creation, shader handling and draw submission. The library then translates these calls to the appropriate backend at compile time or runtime.
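
The layering described above can be sketched generically; Python is used here purely for illustration, and none of these names belong to bgfx's actual C++ API:

```python
# Generic sketch of a thin rendering abstraction: one front-end API,
# pluggable backends chosen at startup. All names are illustrative only;
# bgfx's real C++ API differs.
class Backend:
    def submit(self, draw_call):
        raise NotImplementedError

class VulkanBackend(Backend):
    def submit(self, draw_call):
        return f"vulkan:{draw_call}"

class MetalBackend(Backend):
    def submit(self, draw_call):
        return f"metal:{draw_call}"

class Renderer:
    """Front-end the application codes against, backend-agnostic."""
    def __init__(self, backend: Backend):
        self._backend = backend

    def draw(self, mesh):
        return self._backend.submit(mesh)

# The same application code runs unchanged against either backend:
frame = Renderer(VulkanBackend()).draw("quad")
```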

Supported rendering backends now span the current landscape of graphics technology:

  • Direct3D 11 and Direct3D 12
  • Metal
  • OpenGL 2.1 through 3.1+, OpenGL ES 2 and 3.1
  • Vulkan
  • WebGL 1.0 and 2.0
  • WebGPU via Dawn Native

Platform coverage is equally comprehensive, including Android 4.0+, iOS and iPadOS 16.0+, Linux, macOS 13.0+, PlayStation 4, Raspberry Pi, Universal Windows Platform, WebAssembly/Emscripten and Windows 7 and newer. This breadth matters particularly now as teams ship simultaneously to desktop, console, mobile and browser targets.

Recent updates have kept the project aligned with emerging standards. WebGPU support positions bgfx to take advantage of the new web graphics API while maintaining compatibility with established backends. The library continues to support current compilers including Clang 11+, GCC 11+, VS2022 and Apple Clang 12+.

Production use cases demonstrate its practical value. Carbon Games built AirMech Strike on bgfx for its cross-platform needs. The Crown engine uses it as its rendering foundation, while cmftStudio leverages the library for cubemap processing tools. These implementations show how bgfx enables sophisticated graphics without forcing teams into monolithic engine frameworks.

For builders, the technical benefit is clear: reduced platform-specific code, consistent performance characteristics across backends, and the ability to iterate on rendering techniques without being locked to a single API. As new graphics standards emerge and hardware vendors continue diverging, bgfx's maintained abstraction layer provides a stable foundation that lets developers focus on creating experiences rather than fighting API differences.

The library's ongoing evolution proves that thoughtful abstraction layers can outlast individual graphics APIs. In an industry prone to chasing the newest technology, bgfx's steady approach offers a pragmatic path for shipping reliable, portable rendering code.

Use Cases
  • Custom engine developers targeting Windows, Linux, and consoles
  • Tool makers building cross-platform visualization and editing software
  • Game studios shipping titles to mobile, web, and desktop simultaneously
Similar Projects
  • Filament - Delivers cross-platform rendering with stronger emphasis on physically based materials and high visual fidelity
  • Diligent Engine - Provides similar API abstraction but includes more built-in pipeline state management utilities
  • Sokol - Supplies a minimalist single-header alternative focused on simplicity over extensive backend feature parity

More Stories

Dear ImGui Ships v1.92.6 With Refined Tooling 🔗

Latest release sustains immediate-mode efficiency for C++ debug and content tools

ocornut/imgui · C++ · 72.3k stars Est. 2014

Dear ImGui has released version 1.92.6, the first major update following last year's 10th anniversary of v1.00. The new version refines existing systems rather than introducing sweeping changes, with particular attention to stability and feature discoverability for long-term users.

The library continues to output optimized vertex buffers that integrate directly into any 3D rendering pipeline. Its immediate-mode design eliminates the need to maintain separate UI state, reducing synchronization bugs that commonly appear in tool development. Developers report integrating basic functionality in roughly 25 lines of code within established codebases.
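
The immediate-mode idea behind that efficiency can be sketched in a few lines (illustrative only; Dear ImGui's real API is C++): each frame re-declares the UI directly from application state, so there is no retained widget tree to fall out of sync.

```python
# Sketch of the immediate-mode paradigm: the UI is re-declared every frame
# straight from application state, and input is handled inline in the same
# frame. Illustrative only; not Dear ImGui's actual API.
def draw_frame(state, clicked):
    """Declare the UI for one frame; return the resulting draw list."""
    draws = [f"text: count = {state['count']}"]
    draws.append("button: increment")
    if clicked:                    # input handled inline, same frame
        state["count"] += 1
    return draws

state = {"count": 0}
draw_frame(state, clicked=True)    # user pressed the button this frame
frame = draw_frame(state, clicked=False)
```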

Core strengths remain unchanged: minimal dependencies, renderer agnosticism, and explicit focus on programmer tools rather than end-user applications. The release notes urge users to read the full changelog, noting that many teams overlook useful widgets added over the past decade.

Funding remains a central theme. The maintainers highlight invoiced sponsorship options for companies using the library in commercial engines and embedded systems, seeking sustainable support beyond individual donations. Multi-platform consistency across Windows, macOS, Linux and console environments keeps it relevant for teams shipping tools alongside their products.

The update reinforces Dear ImGui's position as reliable infrastructure rather than a flashy newcomer, delivering predictable performance where it is needed most.

Use Cases
  • Game developers integrating debug tools into 3D engines
  • Software engineers building visualization interfaces for real-time applications
  • C++ programmers creating content creation tools in game studios
Similar Projects
  • Nuklear - matches immediate-mode approach but targets pure C
  • egui - implements similar paradigm for Rust game development
  • Qt - supplies retained-mode widgets with greater complexity and dependencies

Godot Dialogue Manager Updated With Static ID Support 🔗

Version 3.10.2 enhances reliability and flexibility for nonlinear dialogue in Godot 4

nathanhoad/godot_dialogue_manager · GDScript · 3.4k stars Est. 2022

Godot developers have a new reason to revisit their dialogue systems with the release of godot_dialogue_manager version 3.10.2.

The update introduces an option to generate static IDs for the current file. This helps maintain consistent line references as projects evolve, reducing errors in save systems and translations.
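
The benefit of stable IDs can be sketched in a few lines (illustrative; not the addon's actual ID format): content keyed by ID survives reordering and editing, where position-based keys would break.

```python
# Why static line IDs help: a translation table keyed by stable IDs keeps
# resolving correctly after lines are reordered or edited. Illustrative
# only; not godot_dialogue_manager's actual ID format.
translations = {"greet_01": "Bonjour !", "farewell_01": "Au revoir !"}

def translate(lines):
    """Look lines up by their stable ID, ignoring their position."""
    return [translations.get(line_id, text) for line_id, text in lines]

# Lines were reordered between releases; the IDs still resolve correctly.
v2_script = [("farewell_01", "Goodbye!"), ("greet_01", "Hello!")]
out = translate(v2_script)
```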

Another notable change allows for different indentation styles. Developers can now align the tool with their preferred code formatting without compromising functionality.

The release focuses heavily on stability with several fixes. These include improvements to the find function in the dialogue panel, the C# mutation handler, null checks for autoloads in symbol lookup, and character name completions that previously included commented lines.

A bug in the _check_condition() function was also resolved, preventing crashes with non-primitive values.

These enhancements matter because reliable dialogue tools are essential for the growing number of narrative-focused titles built with the Godot engine. The addon provides a stateless branching dialogue editor and runtime, allowing creators to write in a script-like manner and integrate easily into their games using GDScript or C#.

It supports advanced features such as conditions, mutations, and dialogue balloons. As Godot gains traction among indie developers, maintaining such specialized addons becomes increasingly important for efficient content creation.

The project continues to see contributions from the community, with a new contributor submitting a pull request in this release.

Use Cases
  • Indie developers script branching narratives for Godot powered games
  • RPG makers implement conditional dialogue with mutations and variables
  • Narrative designers integrate custom balloons into Godot interactive stories
Similar Projects
  • Dialogic - provides visual node editor instead of script syntax
  • YarnSpinner-Godot - ports established narrative language to Godot runtime
  • Godot Ink - leverages Ink language for more complex dialogue logic

Flame Engine Ships 1.36.0 With Key Fixes 🔗

Update resolves issues in hitboxes, text components and initialization methods

flame-engine/flame · Dart · 10.5k stars Est. 2017

Flame, the established Flutter-based game engine, has released version 1.36.0, delivering a series of targeted fixes to improve stability and functionality.

Among the changes, the update corrects ray direction normalization drift that affected accuracy in ray casting operations. CircleComponent now properly initializes its center offset in the constructor, while IconComponent resolves fontPackage handling during rasterization.

Hitbox calculations have been updated to correctly factor in parent scale and rotation, addressing a significant source of bugs in games with dynamic objects. The buildContext becomes available during onLoad and onMount methods, allowing developers to access context earlier in the component lifecycle.
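
The math behind that hitbox fix can be illustrated with a scale-then-rotate-then-translate transform; this is a generic sketch, not Flame's implementation:

```python
# Illustrative math behind the hitbox fix: a local hitbox point must be
# scaled and rotated by the parent transform before translation into world
# space. Generic sketch only; not Flame's actual code.
import math

def to_world(point, parent_pos, parent_scale, parent_angle_rad):
    x = point[0] * parent_scale[0]                       # scale
    y = point[1] * parent_scale[1]
    c, s = math.cos(parent_angle_rad), math.sin(parent_angle_rad)
    rx, ry = x * c - y * s, x * s + y * c                # rotate
    return (rx + parent_pos[0], ry + parent_pos[1])      # translate

# A point at (1, 0) on a parent scaled 2x, rotated 90 degrees, at (10, 0)
# ends up at roughly (10, 2).
p = to_world((1, 0), (10, 0), (2, 2), math.pi / 2)
```

Skipping the scale or rotation step, as in the pre-fix behaviour, is exactly what produces collision bugs once parents start moving and rotating.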

Text rendering sees improvements with added CJK wrapping support in TextBoxComponent. The release also prevents removed children from being erroneously re-added when their parent component is moved.

These enhancements build on Flame's core offerings, which include a complete game loop, a component/object system, collision detection, gesture and input handling, and support for images, animations, sprites and particles. The engine integrates seamlessly with Flutter, providing utilities that simplify game development.

Bridge packages extend its capabilities to include audio playback, state management with Bloc, and other integrations, making it a comprehensive solution for Flutter game developers.

Community resources include extensive documentation, examples, tutorials and an active Discord server for support.

Use Cases
  • Independent developers creating cross-platform 2D titles with Flutter
  • Professional studios integrating audio and state management in games
  • Educators teaching game development through interactive Flutter examples
Similar Projects
  • Godot - offers broader 2D/3D support with its own editor and scripting
  • Unity - provides visual tools and 3D capabilities but with licensing costs
  • Phaser - delivers web-focused JavaScript framework for HTML5 browser games

Quick Hits

Pixelorama Pixelorama equips builders with a full-featured open-source pixel art suite for sprites, tilesets, animations and more across desktop and web. 9.3k
GodSVG GodSVG delivers precise structured SVG editing in a cross-platform vector graphics editor available on desktop and web. 2.4k
bevy Bevy gives Rust developers a refreshingly simple data-driven game engine focused on clean architecture and fast iteration. 45.4k
Godot-Game-Template This Godot template provides production-ready menus, pause systems, scene loading and tools so you can skip boilerplate and start building. 1.3k
escoria-demo-game Escoria's demo game showcases a complete point-and-click adventure framework for creating narrative-driven interactive stories in Godot. 839