
How AI Caused and Fixed My Insomnia

yage.ai//grapeot

Around late March, my sleep took a serious hit. I would often get only two or three hours per night. At first I blamed work stress, but even after quitting my job, the problem persisted. Fortunately, after some analysis and experimentation, I quickly found the root cause and added back an average of one hour and forty minutes to my nightly sleep. This article shares how a heavy AI user like me took an unconventional approach to diagnose an unexpected insomnia trigger (spoiler: it was using AI for intense, multi-threaded thinking late at night), and reflects on which parts of the process AI genuinely helped with and which still depend on human judgment.

An Unconventional Insomnia Diagnosis

Let me walk through how the problem unfolded and got resolved. When the insomnia first started, I did what most people do: I guessed. Too much coffee or alcohol? Not enough melatonin? Eating too late? After two weeks of trial and error with no improvement, I decided to take a more scientific approach.

Specifically, I spent five minutes having AI write an app. It reads HealthKit data from my Apple Watch and iPhone and exports it to my computer—caffeine intake, alcohol consumption, bedtime, sleep stages (REM, deep sleep, etc.), even blood glucose, blood pressure, and weight—essentially everything my wearables track. On top of that, my computer had additional data like what I was doing hour by hour. The point is, my first move wasn't to guess the cause; it was to gather messy but comprehensive information from every available source to fuel the next step.
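To make the "messy but comprehensive" step concrete, here is a minimal sketch of merging several per-metric exports into one record per day. The CSV layout (`date,metric,value` columns) is hypothetical; the post doesn't specify what the app actually emits.

```python
import csv
import io
from collections import defaultdict

def merge_metric_csvs(csv_texts):
    """Merge per-metric CSV exports (given as text) into one dict per day.

    Assumes each export has columns `date` (YYYY-MM-DD), `metric`, and
    `value` -- a hypothetical format for illustration only.
    """
    days = defaultdict(dict)
    for text in csv_texts:
        for row in csv.DictReader(io.StringIO(text)):
            # Later files win on conflicts; one row per (day, metric).
            days[row["date"]][row["metric"]] = float(row["value"])
    return dict(days)
```

Each resulting day-record can then hold caffeine, alcohol, bedtime, sleep stages, and the activity log side by side, ready for the analysis step below.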

Step two was feeding all this data to AI and asking it to analyze whether any factors correlated with poor sleep. AI chose to run a multivariate regression, looking for variables with a significant negative correlation with sleep duration. The results were interesting. Some correlations were technically present but trivial—for instance, later bedtimes correlated with shorter sleep, which is a tautology: if I could fall asleep early, I would. Others were surprisingly absent, like caffeine. Neither the number of cups nor the time of day I had coffee showed any meaningful correlation with that night's sleep, probably because I always finish my coffee before noon and it's mostly metabolized by bedtime.
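The kind of regression described above can be sketched in a few lines. This is an illustrative version, not the author's actual script: standardizing each predictor puts the coefficients on a comparable scale, so the most negative coefficient points at the factor most associated with shorter sleep.

```python
import numpy as np

def sleep_regression(X, y, names):
    """Ordinary least squares of sleep duration on standardized predictors.

    X: one row per day, one column per factor (caffeine, bedtime, last AI
    usage hour, ...). Returns a coefficient per factor; negative values
    indicate association with shorter sleep. A sketch under assumed inputs.
    """
    X = np.asarray(X, dtype=float)
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize columns
    A = np.column_stack([np.ones(len(Xs)), Xs])        # intercept + predictors
    coef, *_ = np.linalg.lstsq(A, np.asarray(y, dtype=float), rcond=None)
    return dict(zip(names, coef[1:]))                  # drop the intercept
```

With the day-level dataset in hand, each row of `X` is one day and `y` is that night's sleep duration; a real analysis would also check p-values and confounders rather than reading raw coefficients alone.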

So what variable was most correlated? The time of my last AI usage that evening. If I used AI after dinner—whether for coding, learning, or writing—I almost always slept poorly. The later the usage, the worse the sleep. Conversely, if I avoided AI after dinner and did "unhealthy" things like gaming, watching videos, or chatting on WeChat, I generally slept fine.

The fix was straightforward. I consciously stopped using AI in the evenings, opting instead for mindless scrolling or chatting. After a few weeks, my average sleep increased by one hour and forty minutes, reaching a healthy level. Along the way, I also understood why AI affects sleep. AI handles the grunt work, so what's left for us is usually high-intensity reading and thinking that keeps the mind wired. And when I use AI, I rarely sit idle waiting—I run multiple AI sessions in parallel, cycling through them in a tag-team fashion. There's no downtime. My brain stays in a sustained state of tension and high-intensity work, and that state lingers and interferes with sleep.

Why the Non-Obvious Diagnosis, and Where AI Helped

This story may seem straightforward in hindsight, but it still fascinates me. If I had relied on search engines and intuition instead of multivariate analysis, I probably wouldn't have arrived at the right answer. For one, AI is still new enough that neither academic literature nor online articles connect it to insomnia—you wouldn't think to search for it. For another, intuition lumps AI with gaming and video watching. But I wasn't using my phone or watching videos before bed; I was studying with AI on my computer. It's hard to connect that with severe sleep disruption. Without a systematic causal analysis, relying purely on gut feeling or search would have led me down a very long detour.

Even if I had thought of multivariate regression as an approach, it wouldn't have been feasible without my long track record of using AI. Two reasons: data and friction. Health-conscious people know that Apple's HealthKit stores a wealth of information, but like most people, I didn't realize it could be exported—Apple's Health app has no built-in export feature. However, I noticed that third-party apps like Withings could read HealthKit data, so I asked AI whether we could write an app to extract that data to my computer. AI wrote it. With actual data in hand, running multivariate analysis across dozens of days becomes dramatically more efficient than guessing one hypothesis, testing it for a few days, guessing another, testing again—the difference is orders of magnitude.

Another reason I chose the multivariate route is that I knew AI could handle the coding. Writing Python from scratch to clean data, build models, tune parameters, and produce reports isn't impossible for me, but it wouldn't have risen to the top of my priority list early on. I would have tried several wrong approaches first before committing the dev time to the "correct but expensive" method. AI provided execution support that shifted my decision timeline.

One design decision worth calling out: the app I (or rather, AI) wrote isn't for human users—it's for AI. That's completely different from traditional software development. In a conventional app, the architecture would be: I tap a button, it gathers HealthKit data, communicates with a server-side analytics engine, processes the data, and visualizes the results in the app. It would tell me, "Here are the three reasons for your insomnia."

My insomnia analysis was the opposite. The app's user is AI. AI calls the app to fetch data, then runs Python for analysis, then presents the results in its chat interface (not in the iOS app that exported the data). In other words, I manipulate AI through conversation, and AI manipulates software through its interfaces. Throughout the entire process, while we wrote software, none of it had a human as its end user. Even though iOS requires me to tap the screen to start the export, I'm just AI's proxy—if it had hands, it could (and should) tap the screen itself.

What Still Depends on You

Beyond what AI helped with, there's an equally interesting side to this story: what still required human judgment. I think the core factor is comfort level—our intuitive sense of how hard a task is. If I had never done iOS development, never owned a Mac, never compiled and deployed an AI-written iOS app, I probably wouldn't have chosen this path. We rationally know that compiling and deploying an iOS app isn't that hard. But without having done it, there's a psychological barrier. It subtly nudges us away from a direction even when we know it's the optimal solution.

I've written about this before: cost structure determines optimal strategy. In that article, I argued that from first principles, the correct way to debug is to instrument your code with logs, expose internal state, and reason from those logs. But reading tens of thousands of lines of logs in a short time is impossible for a human, so we abandon the correct but impractical path and guess where the bug is instead. We even rebrand this necessity as "engineering intuition" and celebrate it. With AI, however, reading through thousands of lines of logs to find the anomaly is trivial. The cost structure shifts, and the optimal debugging strategy shifts with it.
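The "correct but impractical" debugging path above becomes practical once reading cost drops. As a toy illustration of the kind of bulk log scan that is tedious for a human but trivial in code, here is a sketch that flags outlier lines; the `latency_ms=<n>` field is a hypothetical log format, and real logs would need their own pattern.

```python
import re
from statistics import mean, stdev

def flag_anomalies(log_lines, k=3.0):
    """Return indices of log lines whose latency sits more than k standard
    deviations above the mean. Assumes a hypothetical `latency_ms=<n>`
    field embedded in each line."""
    pat = re.compile(r"latency_ms=(\d+)")
    hits = [(i, int(m.group(1)))
            for i, line in enumerate(log_lines)
            if (m := pat.search(line))]
    values = [v for _, v in hits]
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in hits if sigma and v > mu + k * sigma]
```

Pointing an AI (or a script like this) at tens of thousands of such lines is exactly the step that used to be abandoned as too expensive.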

The same logic applies here. In the past, data modeling felt like hard work and writing an iOS app felt like climbing a mountain. That's why we instinctively reach for web searches and guesswork when diagnosing insomnia. But in 2026, I happen to wear an Apple Watch, giving me a rich sleep dataset. I already track my coffee, alcohol, and melatonin intake daily. And I know AI can handle the modeling effortlessly. These three factors together make multivariate regression a low-cost option in this specific scenario, steering me toward it.

In other words, our accumulated skills, past projects, experiences, and subconscious comfort levels constitute our internal model of the world's cost structure. That model silently determines how we approach each problem and what solutions we consider. AI can objectively change this cost structure by making execution cheaper. But knowing something rationally and feeling it viscerally are completely different. Only by actually doing something, experiencing it firsthand, can we internalize this cost shift into our decision-making. That step AI cannot take for us. We have to iterate, analyze, and internalize it ourselves.

Among all the comfort levels we build, comfort with AI is especially important. Take building the iOS app. A year or two ago, I would have sat down at my computer, fired up Cursor, and treated it as a software engineering project. I'd write a design doc, treat AI like a junior engineer by giving it very specific instructions, then code review its output. The whole thing would take two or three hours. I suspect this is still how most people use AI for coding—and it's certainly many times faster than before AI. But if writing the app takes hours, I wouldn't have gone down that path either.

What I actually did was describe the app I wanted to AI by voice on my phone—about a minute or two of talking—then went about my day. AI wrote and tested it. When it was ready, I went to my Mac and clicked Run. There was a bug. I pasted the log back to AI. It fixed it. I spot-checked the exported data against HealthKit to confirm it was correct. Done. Total dev time: about five minutes. In other words, even when we know intellectually that AI should write the code, differences in comfort level with letting AI work independently lead to orders-of-magnitude differences in efficiency and to completely different technical decisions.

One more subtle detail: before writing the app, I wasn't sure whether HealthKit data could be exported. But I had observed that apps like Withings could read HealthKit, so I asked AI about it. That kind of observational ability is hard for AI to replace anytime soon. In practice, when you ask AI a question, it often gives a conservative answer: "can't be done." But if you provide an observation—"well, how does that other app do it?"—AI can research more effectively and arrive at the right technical direction.

Closing Thoughts

All in all, this is a story I'm both happy and unhappy about. I'm happy that the sleep issue is resolved and, more importantly, resolved through a principled approach—the right method applied cleanly—rather than random trial and error that happens to work. AI helped not just with execution but also with reshaping my comfort level across many domains, letting me converge on principled methods in more scenarios. The unhappy side? Without AI, this problem wouldn't have existed in the first place, and I wouldn't have needed to debug my insomnia at all😂. But on balance, it clarified a lot of things, so it was worth it.
