Results for: "llm"
Keyword Search: 9 results
K-Search: LLM Kernel Generation via Co-Evolving Intrinsic World Model
THE GIST: K-Search uses a co-evolving intrinsic world model to generate and optimize GPU kernels for LLM workloads, outperforming existing methods.
FastFlowLM: Run LLMs on AMD Ryzen AI NPUs Without a GPU
THE GIST: FastFlowLM enables running large language models on AMD Ryzen AI NPUs, delivering fast, power-efficient inference without a dedicated GPU.
OnGarde: Runtime Security for Self-Hosted AI Agents
THE GIST: OnGarde is a proxy that scans requests to LLM APIs, blocking credentials, PII, prompt injections, and dangerous shell commands.
AI Bots Overconsume Map Tiles, Disrupting Small Websites
THE GIST: AI bots are excessively consuming map tiles, leading to unexpected costs and service disruptions for small website owners.
Sentinel Protocol: Open-Source AI Firewall for LLM Security
THE GIST: Sentinel Protocol is an open-source local proxy that filters and secures traffic between applications and LLM APIs, preventing PII leaks and prompt injections.
Ternary AI: A New Era of Computing Beyond Binary Limits
THE GIST: A new ternary AI architecture uses 3-phase AC power for computation, claiming to bypass binary limitations and enable near-instantaneous natural language generation.
MVAR: Deterministic Sink Enforcement for AI Agent Security
THE GIST: MVAR offers deterministic policy enforcement at execution sinks to prevent prompt-injection-driven tool misuse in AI agents.
BreakMyAgent: Open-Source Tool for Red-Teaming LLM System Prompts
THE GIST: BreakMyAgent is an open-source sandbox for automated testing of LLM system prompts against exploits.
DeepSeek's DualPath Breaks Bandwidth Bottleneck in LLM Inference
THE GIST: DeepSeek's DualPath system improves LLM inference throughput by optimizing KV-Cache loading in disaggregated architectures.