
How a single computer file accidentally took down 20% of the internet on Tuesday – in plain English

Yesterday’s outage showed how dependent the modern web is on a handful of core infrastructure providers.

In fact, it’s so dependent that a single configuration error made large parts of the internet totally unreachable for several hours.

Many of us work in crypto because we understand the dangers of centralization in finance, but the events of yesterday were a clear reminder that centralization at the internet’s core is just as urgent a problem to solve.

The obvious giants like Amazon, Google, and Microsoft run enormous chunks of cloud infrastructure.

But equally critical are CDN providers (networks of servers that deliver websites faster around the world) such as Cloudflare, Fastly, and Akamai, cloud hosts like DigitalOcean, and DNS providers (the “address book” of the internet) such as UltraDNS and Dyn.

Most people barely know their names, yet their outages can be just as crippling, as we saw yesterday.

To start with, here’s a list of companies you may never have heard of that are critical to keeping the internet running as expected.

| Category | Company | What They Control | Impact If They Go Down |
|---|---|---|---|
| Core Infra (DNS/CDN/DDoS) | Cloudflare | CDN, DNS, DDoS protection, Zero Trust, Workers | Huge portions of global web traffic fail; thousands of sites become unreachable. |
| Core Infra (CDN) | Akamai | Enterprise CDN for banks, logins, commerce | Major enterprise services, banks, and login systems break. |
| Core Infra (CDN) | Fastly | CDN, edge compute | Global outage potential (as seen in 2021: Reddit, Shopify, gov.uk, NYT). |
| Cloud Provider | AWS | Compute, hosting, storage, APIs | SaaS apps, streaming platforms, fintech, and IoT networks fail. |
| Cloud Provider | Google Cloud | YouTube, Gmail, enterprise backends | Massive disruption across Google services and dependent apps. |
| Cloud Provider | Microsoft Azure | Enterprise & government clouds | Office365, Teams, Outlook, and Xbox Live outages. |
| DNS Infrastructure | Verisign | .com & .net TLDs, root DNS | Catastrophic global routing failures for large parts of the web. |
| DNS Providers | GoDaddy / Cloudflare / Squarespace | DNS management for millions of domains | Entire companies vanish from the internet. |
| Certificate Authority | Let’s Encrypt | TLS certificates for most of the web | HTTPS breaks globally; users see security errors everywhere. |
| Certificate Authority | DigiCert / GlobalSign | Enterprise SSL | Large corporate sites lose HTTPS trust. |
| Security / CDN | Imperva | DDoS, WAF, CDN | Protected sites become inaccessible or vulnerable. |
| Load Balancers | F5 Networks | Enterprise load balancing | Banking, hospitals, and government services can fail nationwide. |
| Tier-1 Backbone | Lumen (Level 3) | Global internet backbone | Routing issues cause global latency spikes and regional outages. |
| Tier-1 Backbone | Cogent / Zayo / Telia | Transit and peering | Regional or country-level internet disruptions. |
| App Distribution | Apple App Store | iOS app updates & installs | iOS app ecosystem effectively freezes. |
| App Distribution | Google Play Store | Android app distribution | Android apps cannot install or update globally. |
| Payments | Stripe | Web payments infrastructure | Thousands of apps lose the ability to accept payments. |
| Identity / Login | Auth0 / Okta | Authentication & SSO | Logins break for thousands of apps. |
| Communications | Twilio | 2FA SMS, OTP, messaging | Large portion of global 2FA and OTP codes fail. |

What happened yesterday

Yesterday’s culprit was Cloudflare, a company that routes almost 20% of all web traffic.

It now says the outage started with a small database configuration change that accidentally caused a bot-detection file to include duplicate items.

That file suddenly grew beyond a strict size limit. When Cloudflare’s servers tried to load it, they failed, and many websites that use Cloudflare began returning HTTP 5xx errors (error codes users see when a server breaks).

Here’s the simple chain:

[Diagram: chain of events]

A small database tweak sets off a big chain reaction

The trouble began at 11:05 UTC when a permissions update made the system pull extra, duplicate information while building the file used to score bots.

That file normally includes about sixty items. The duplicates pushed it past a hard cap of 200. When machines across the network loaded the oversized file, the bot component failed to start, and the servers returned errors.
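To make the failure mode concrete, here is a minimal sketch (in Python, not Cloudflare’s actual code) of why a hard item cap turns an oversized config file into a startup failure rather than a warning. The numbers match the write-up; the feature names are invented for illustration.

```python
FEATURE_LIMIT = 200  # hard cap described in the post-mortem

def load_bot_features(lines):
    """Parse a feature file; refuse to start if it exceeds the cap."""
    features = [line.strip() for line in lines if line.strip()]
    if len(features) > FEATURE_LIMIT:
        # Memory is preallocated for at most FEATURE_LIMIT features, so an
        # oversized file is treated as fatal instead of being trimmed.
        raise RuntimeError(
            f"feature file has {len(features)} entries, limit is {FEATURE_LIMIT}"
        )
    return features

normal_file = [f"feature_{i}" for i in range(60)]   # ~60 items: loads fine
broken_file = normal_file * 4                       # duplicates push it past 200

load_bot_features(normal_file)                      # ok
try:
    load_bot_features(broken_file)
except RuntimeError as err:
    print("bot module refuses to start:", err)      # servers answer with 5xx instead
```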

According to Cloudflare, both the current and older server paths were affected. One returned 5xx errors. The other assigned a bot score of zero, which could have falsely flagged traffic for customers who block based on bot score (Cloudflare’s bot vs. human detection).

Diagnosis was tricky because the bad file was rebuilt every five minutes from a database cluster being updated piece by piece.

If the system pulled from an updated piece, the file was bad. If not, it was good. The network would recover, then fail again, as versions switched.
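A rough sketch of why that produces an on/off pattern, assuming (as the write-up describes) that the file is rebuilt every few minutes from whichever database piece answers, and only some pieces have the permissions change yet. Everything here is invented for illustration.

```python
import random

FEATURE_LIMIT = 200

def rebuild_feature_file(replica_updated: bool) -> int:
    """Return how many entries a rebuild would produce from that replica."""
    base_features = 60
    return base_features * 4 if replica_updated else base_features  # duplicates inflate it

for cycle in range(6):                                 # six five-minute rebuild cycles
    hit_updated_replica = random.random() < 0.5        # rollout is only partial
    size = rebuild_feature_file(hit_updated_replica)
    status = "FAIL (5xx)" if size > FEATURE_LIMIT else "OK"
    print(f"cycle {cycle}: {size} entries -> {status}")
```

Depending on which replica each rebuild hits, the network alternates between a good file and a bad one, which is exactly the recover-then-fail pattern engineers saw.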

According to Cloudflare, this on-off pattern initially looked like a possible DDoS, especially since a third-party status page also failed around the same time. Focus shifted once teams linked errors to the bot-detection configuration.

By 13:05 UTC, Cloudflare applied a bypass for Workers KV (its key-value storage service) and Cloudflare Access (its authentication product), routing around the failing behavior to cut impact.

The main fix came when teams stopped generating and distributing new bot files, pushed a known good file, and restarted core servers.

Cloudflare says core traffic began flowing by 14:30, and all downstream services recovered by 17:06.

The failure highlights some design tradeoffs.

Cloudflare’s systems enforce strict limits to keep performance predictable. That helps avoid runaway resource use, but it also means a malformed internal file can trigger a hard stop instead of a graceful fallback.

Because bot detection sits on the main path for many services, one module’s failure cascaded into the CDN, security features, Turnstile (CAPTCHA alternative), Workers KV, Access, and dashboard logins. Cloudflare also noted extra latency as debugging tools consumed CPU while adding context to errors.

On the database side, a narrow permissions tweak had wide effects.

The change made the system “see” more tables than before. The job that builds the bot-detection file did not filter tightly enough, so it grabbed duplicate column names and expanded the file beyond the 200-item cap.

The loading error then triggered server failures and 5xx responses on affected paths.
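The post-mortem describes this as a metadata query that did not filter tightly enough. A hedged sketch of that class of bug, with invented schema and column names, might look like this:

```python
# (database, table, column) rows the build job can now "see" after the
# permissions change made a second copy of the schema visible.
metadata_rows = [
    ("default", "bot_features", "score_signal_a"),
    ("default", "bot_features", "score_signal_b"),
    ("replica", "bot_features", "score_signal_a"),   # newly visible duplicate
    ("replica", "bot_features", "score_signal_b"),   # newly visible duplicate
]

def build_feature_list_loose(rows):
    """Buggy version: ignores which database a column came from."""
    return [column for _db, _table, column in rows]

def build_feature_list_strict(rows, database="default"):
    """Safer version: pin one schema and de-duplicate."""
    return sorted({column for db, _table, column in rows if db == database})

print(build_feature_list_loose(metadata_rows))    # duplicates inflate the file
print(build_feature_list_strict(metadata_rows))   # stable, deduplicated list
```

The loose version silently doubles the feature list the moment a second schema becomes visible; the strict version produces the same output before and after the permissions change.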

Impact varied by product. Core CDN and security services threw server errors.

Workers KV saw elevated 5xx rates because requests to its gateway passed through the failing path. Cloudflare Access had authentication failures until the 13:05 bypass, and dashboard logins broke when Turnstile could not load.

Cloudflare Email Security temporarily lost an IP reputation source, reducing spam detection accuracy for a period, though the company said there was no critical customer impact. After the good file was restored, a backlog of login attempts briefly strained internal APIs before normalizing.

The timeline is straightforward.

The database change landed at 11:05 UTC. First customer-facing errors appeared around 11:20–11:28.

Teams opened an incident at 11:35, applied the Workers KV and Access bypass at 13:05, stopped creating and spreading new files around 14:24, pushed a known good file and saw global recovery by 14:30, and marked full restoration at 17:06.

According to Cloudflare, automated tests flagged anomalies at 11:31, and manual investigation began at 11:32, which explains the pivot from suspected attack to configuration rollback within two hours.

| Time (UTC) | Status | Action or Impact |
|---|---|---|
| 11:05 | Change deployed | Database permissions update led to duplicate entries |
| 11:20–11:28 | Impact starts | HTTP 5xx surge as the bot file exceeds the 200-item limit |
| 13:05 | Mitigation | Bypass for Workers KV and Access reduces error surface |
| 13:37–14:24 | Rollback prep | Stop bad file propagation, validate known good file |
| 14:30 | Core recovery | Good file deployed, core traffic routes normally |
| 17:06 | Resolved | Downstream services fully restored |

The numbers explain both cause and containment.

A five-minute rebuild cycle repeatedly reintroduced bad files as different database pieces updated.

A 200-item cap protects memory use, and a typical count near sixty left comfortable headroom, until the duplicate entries arrived.

The cap worked as designed, but the lack of a tolerant “safe load” for internal files turned a bad config into a crash instead of a soft failure with a fallback model. According to Cloudflare, that’s a key area to harden.
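One way to picture the “safe load” idea, as a sketch rather than Cloudflare’s actual design: validate a new file before promoting it, and keep serving the last known-good version if validation fails, instead of crashing the module.

```python
FEATURE_LIMIT = 200

def validate(features):
    """A candidate file must be non-empty, within the cap, and duplicate-free."""
    return 0 < len(features) <= FEATURE_LIMIT and len(set(features)) == len(features)

class FeatureStore:
    def __init__(self, initial):
        if not validate(initial):
            raise ValueError("initial feature file must be valid")
        self.active = list(initial)               # last known-good file

    def try_update(self, candidate):
        if validate(candidate):
            self.active = list(candidate)         # promote the new file
            return True
        # Soft failure: keep serving the old file and report the problem.
        print(f"rejected bad file ({len(candidate)} entries); keeping last known-good")
        return False

store = FeatureStore([f"feature_{i}" for i in range(60)])
store.try_update([f"feature_{i}" for i in range(60)] * 4)   # rejected
print(len(store.active))   # still 60: the service keeps running on the old file
```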

Cloudflare says it will harden how internal configuration is validated, add more global kill switches for feature pipelines, stop error reporting from consuming large CPU during incidents, review error handling across modules, and improve how configuration is distributed.

The company called this its worst incident since 2019 and apologized for the impact. According to Cloudflare, there was no attack; recovery came from halting the bad file, restoring a known good file, and restarting server processes.

The post How a single computer file accidentally took down 20% of the internet on Tuesday – in plain English appeared first on CryptoSlate.
