This may be the last chance for ordinary people to understand AI in advance.
BlockBeats
02-12 15:02
While AI has begun to perform tasks independently and even take part in its own evolution, most people are missing the last window of opportunity to adapt proactively.

Author: BlockBeats

Original title: Something Big Is Happening
Original link: @mattshumer_
Compiled by: Peggy, BlockBeats

Editor's Note: Many people's judgment of AI is still stuck at "it seems somewhat useful, but that's about it." What most don't realize is that a change big enough to rearrange daily life has already quietly begun.

This article is not an abstract discussion about whether AI will replace humans, but a first-person record of the real changes from the perspective of a practitioner at the forefront of AI core research and application: when model capabilities undergo non-linear leaps in a short period of time, when AI is no longer just an auxiliary tool, but can independently complete complex tasks and even participate in building the next generation of AI, the once solid professional boundaries are rapidly loosening.

This time, the change isn't a gradual technological upgrade, but rather a shift in operational logic. Whether or not you're in the tech industry, everyone whose work revolves around a screen is affected. When AI starts doing your work, how will you coexist with it?

The following is the original text:

Please recall February 2020.

If you were paying close attention back then, you might have noticed a few people talking about a virus spreading overseas. But the vast majority didn't care. The stock market was doing well, kids were going to school as usual, and you were still eating out, shaking hands, and planning trips. If someone told you they were hoarding toilet paper, you'd probably think they'd been browsing some weird corner of the internet too much. But in about three weeks, the whole world changed completely. Offices closed, kids went home, and life was rearranged in a way you would never have believed a month ago.

I think we are now in that same "isn't this a bit of an exaggeration?" phase, and the scale of this event will be far greater than the COVID-19 pandemic.

I've been founding and investing in AI companies for six years now, and I live in this world. I'm writing this for those in my life who aren't in this industry—my family, friends, and people I care about. They keep asking me, "What's really going on with AI?" And my answers have never truly reflected what's happening. I always give a polite version, a cocktail party version. Because if I told them the truth, it would sound like I'm crazy. For a long time, I told myself that this was a good enough reason to keep the real story to myself. But now, the gap between what I've been saying and reality is too big to ignore. The people I care about should know what's coming next, even if it sounds crazy.

Let me be clear about one thing first: Although I work in the AI industry, I have virtually no influence over what's to come, and neither do the vast majority of people in the industry. The future is truly shaped by a tiny minority: hundreds of researchers spread across a handful of companies—OpenAI, Anthropic, Google DeepMind, and a few others. A single training run, completed by a small team over a few months, can potentially create an AI system capable of altering the entire trajectory of technology. Most of us in the field are building on a foundation laid by others. We, like you, are simply watching it unfold—only feeling the ground shake first because we're closer.

But now is the time. Not the kind of time where "we should talk about it someday," but the time where "this is happening now, and you have to understand it now."

I know it's all true because it happened to me first.

There's something almost no one outside the tech industry has realized: the reason so many people in the tech industry are sounding the alarm is that this has already happened to us. We're not making predictions; we're telling you: this has already happened in our work, and you could very well be next.

For years, AI has been steadily progressing. There have been occasional big leaps, but the intervals between each one have been long enough for you to digest them. But in 2025, new technologies for building models emerged, and the pace of progress accelerated dramatically. Then it got even faster, and even faster still. Each new generation of models isn't just a little better than the previous one; it's significantly better, and the release intervals are shorter. I use AI more and more, yet I communicate with it less and less, watching it handle things I previously thought I could only accomplish with my own expertise.

Then, on February 5th, two top AI labs released new models on the same day: OpenAI's GPT-5.3 Codex and Opus 4.6 from Anthropic, the company behind Claude. At that moment, everything clicked into place. It wasn't like a light suddenly being turned on; it was more like realizing the water level had quietly risen to your chest.

I no longer need to do the actual technical parts of my work. I describe what I want to build in plain English, and it simply appears. Not a draft that requires repeated revisions, but a finished product. I tell the AI the goal, leave the computer for four hours, and when I come back, the work is done—and done very well, better than I could do myself, without any modifications. A few months ago, I had to communicate back and forth with the AI, providing guidance and adjustments; now, I simply describe the result and leave.

Let me give you a concrete example to help you understand what this looks like in practice. I would tell the AI, "I want to create an application like this, which should implement these functions, roughly like this. You handle the user flow, the design, everything." And then it actually does it. It writes tens of thousands, even hundreds of thousands of lines of code. Even more incredible—something unimaginable a year ago—it opens the application itself, clicks buttons, tests the functions, and uses it like a human. If it feels something looks wrong or doesn't work smoothly, it goes back and modifies it, iterating like a developer, constantly refining and polishing it until it's satisfied. Only after it determines that the application meets its standards will it come back to tell me, "You can test it now." And when I test it, it's usually perfect.

I'm not exaggerating. This was my actual workday this past Monday.

But what truly amazed me was the model released last week (GPT-5.3 Codex). It doesn't just execute instructions; it makes judgments. For the first time, it made me feel that it possesses something akin to "taste"—that intuitive judgment about "what is the right choice," something people have always said AI will never have. This model already possesses it, or at least, it's come close to making that distinction irrelevant.

I've always been among the first to adopt AI tools. But the past few months have completely blown me away. This isn't incremental improvement; it's something entirely different.

Why does this concern you—even if you're not in the tech industry?

The AI labs made a very clear choice: they prioritized making AI proficient at coding. The reason is simple: building AI itself requires a lot of code. If AI can write that code, it can help build its next generation: a smarter version writing better code, then building an even smarter version. Making AI proficient at programming is the key that unlocks everything. That's why they did it first. The reason my work changed before yours isn't that they specifically targeted software engineers; it's a side effect of how they prioritized.

Now, that step is complete. And they are moving on to all other areas.

The feeling that tech workers have experienced over the past year—watching AI transform from a "useful tool" into "someone who can do my job better than me"—is about to become everyone's experience. Law, finance, healthcare, accounting, consulting, writing, design, analytics, customer service… not in ten years. The people building these systems say one to five years. Some say even less. And based on the changes I've seen in recent months, I think "less" is more likely.

"But I've used AI before, and I didn't find it particularly impressive."

I've heard this sentence countless times, and I completely understand it because it was once true.

If you used ChatGPT in 2023 or early 2024 and thought "it makes things up" or "it's just so-so," you weren't wrong. Those early versions were indeed limited: they hallucinated, confidently spouting absurd content.

But that was two years ago. In the timescale of AI, that's almost prehistoric.

The models available today are completely different from those even six months ago. The debate about whether AI is truly still progressing or has hit a ceiling—which lasted for over a year—is over. Absolutely over. Those still saying that have either never used the current models, are intentionally downplaying reality, or are still stuck in their 2024 experience, which is no longer relevant. I'm not trying to belittle anyone, but rather to emphasize that the gap between public perception and reality has become dangerously large because it prevents people from preparing in advance.

Another problem is that most people are using free versions of AI tools. These free versions are more than a year behind the versions available to paying users. Judging the level of AI using the free version of ChatGPT is like judging the development of smartphones using a flip phone. Those who pay for the best tools and use them daily in their real work know very well what's coming next.

I often think of a lawyer friend of mine. I keep urging him to seriously utilize AI in his firm, but he always finds reasons: it's not suited to his specific area, it made mistakes during testing, and it doesn't understand the nuances of his work. I understand. But partners at large law firms have already approached me for advice because they've tried the latest versions and seen the trend. One managing partner at a large firm spends several hours a day using AI. He says it's like having an entire team of junior lawyers at his fingertips. He's not using AI as a toy; it really works. He told me something that I still remember: every few months, its capabilities in his work improve significantly. At this rate, he expects AI to soon be doing most of his work—and he's a managing partner with decades of experience. He's not panicking, but he's watching this very, very seriously.

Those who are truly at the forefront of their respective industries—those who are seriously experimenting—are not taking this lightly. They are amazed by what AI can do now and are repositioning themselves accordingly.

Just how fast was it?

I want to make this speed concrete because it's the hardest part to believe unless you've seen it up close.

2022: AI couldn't reliably perform even basic arithmetic, and would solemnly tell you that 7 × 8 = 54.

2023: It can pass the bar exam.

2024: It could write working software and explain scientific questions at a graduate level.

By the end of 2025: Some of the world's top engineers say they have already delegated most of their programming work to AI.

February 5, 2026: The arrival of the new model makes everything that came before seem like another era.

If you haven't been using AI seriously in the past few months, it will be almost unrecognizable to you today.

An organization called METR measures this with data. They track how long a model can complete a real-world task without human intervention (measured by the time a human expert would need to complete the task). About a year ago, that number was 10 minutes; then it was 1 hour; then several hours. The most recent measurement (November 2025, Claude Opus 4.5) shows that AI can already complete tasks that would take human experts nearly 5 hours. And this number roughly doubles every 7 months, with the latest data even suggesting that it may accelerate to doubling every 4 months.

And this doesn't even include the model just released this week. From my own experience, this leap is remarkable. I expect another significant jump in METR's next update.

If you extrapolate this trend, which has been going on for years without any signs of slowing down, then: within a year, AI may be able to work independently for a few days; within two years, it may be able to work continuously for a few weeks; and within three years, it may be able to undertake projects that last for months.
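Those extrapolations are just compound doubling. Here is a minimal sketch of the arithmetic, using the approximate figures quoted above (a roughly 5-hour horizon in late 2025, doubling every 7 months, or possibly every 4); the inputs are illustrative paraphrases of the article, not METR's raw data:

```python
def horizon_after(months: float, start_hours: float = 5.0,
                  doubling_months: float = 7.0) -> float:
    """Task horizon in hours after `months`, assuming steady exponential growth."""
    return start_hours * 2 ** (months / doubling_months)

# Compare the two doubling rates mentioned above (every 7 months vs. every 4).
for rate in (7.0, 4.0):
    for years in (1, 2, 3):
        h = horizon_after(12 * years, doubling_months=rate)
        print(f"doubling every {rate:.0f} mo, after {years} yr: "
              f"~{h:,.0f} h (~{h / 40:.1f} forty-hour work weeks)")
```

At the 7-month rate the horizon reaches days of expert work within a couple of years; at the 4-month rate it reaches weeks, which is roughly the trajectory the article describes.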

Anthropic CEO Dario Amodei has stated that AI will be "significantly superior to almost all humans at almost all tasks," with a timeline of 2026 or 2027.

Consider this judgment. If AI is smarter than most PhDs, do you really think it can't do most office jobs?

AI is building the next generation of AI

There is one more thing, which I believe is the most important, yet the least understood, development.

On February 5th, when OpenAI released GPT-5.3 Codex, it included the following statement in its technical documentation: "GPT-5.3-Codex is our first model to play a key role in its own creation. The Codex team used early versions to debug its training process, manage deployments, and diagnose test results and evaluations."

Read it again: AI participated in its own construction.

This isn't speculation about the future; OpenAI is telling you that their newly released AI is already being used to create itself. One of the core factors that makes AI more powerful is using intelligence in AI research and development. And now, AI is smart enough to substantially drive its own evolution.

Dario Amodei has also said that AI already writes "a lot of the code" at his company, and that the feedback loop between the current generation of AI and the next is "accelerating every month." He believes we may be only one to two years away from the current generation of AI autonomously building the next.

One generation helps build the next, and the smarter next generation builds the next generation even faster—researchers call this the intelligence explosion. And those who understand all of this best are precisely those who are building it themselves, and they believe that this process has already begun.

What does this mean for your job?

I'll be blunt, because you deserve honesty, not just comfort.

Dario Amodei, perhaps the most safety-conscious CEO in the entire AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. Many industry insiders believe this assessment is conservative. Judging from the capabilities of the latest models, the technological conditions for large-scale disruption may already be in place by the end of this year. Its actual impact on the economy will take time, but the underlying capabilities are arriving now.

This is unlike any previous round of automation, because AI doesn't replace a specific skill; it is a general replacement for cognitive labor. Moreover, it gets stronger in all areas simultaneously. After factory automation, displaced workers could switch to office work; after the internet hit retail, people could move into logistics or service industries. But AI leaves no "safe haven." Whatever you learn, it is simultaneously getting better at it.

Here are a few specific examples—but please remember, these are just examples, not a complete list. Just because your job isn't mentioned doesn't mean it's safe. Almost all knowledge-based jobs are affected.

In the legal field, AI can already read contracts, summarize precedents, draft legal documents, and conduct legal research at a level approaching that of a junior lawyer. That managing partner wasn't using AI for fun; it had surpassed his junior lawyers at many tasks.

Financial analysis: AI can handle modeling, data analysis, investment memos, and report generation, and it's making rapid progress.

Writing and content: AI-generated marketing copy, reports, news, and technical writing has reached a quality that many professionals cannot distinguish from human writing.

Software engineering: This is the field I'm most familiar with. A year ago, AI struggled to write even a few lines of error-free code; now, it can write hundreds of thousands of lines of correctly executed code. Complex, multi-day projects have been largely automated. In a few years, the number of programmer jobs will be far less than it is today.

Medical analytics: image interpretation, laboratory result analysis, diagnostic recommendations, literature reviews—AI has approached or even surpassed human capabilities in multiple fields.

Customer service: Truly capable AI customer service—not the frustrating robots of five years ago—is being deployed and can handle complex, multi-step issues.

Many people still believe that some things are safe: judgment, creativity, strategic thinking, empathy. I used to say that too. But now, I'm not so sure.

The latest generation of models can already make decisions that feel like "judgments," exhibiting something akin to "taste"—an intuition about "what is the right choice." A year ago, this was unimaginable. My current rule of thumb is: if AI today only vaguely demonstrates a certain ability, the next generation will truly excel in that area. This is exponential progress, not linear progress.

Can AI replicate deep human empathy? Can it replace the trust built up over years of relationships? I don't know. Maybe not. But I've already seen people begin to use AI as emotional support, a source of counseling, and even companionship. This trend will only continue to strengthen.

I believe the honest conclusion is that any work done on a computer is not safe in the medium term. If your work primarily involves reading, writing, analyzing, deciding, and communicating through a keyboard, then AI has already begun to take over significant parts of it. The timeline isn't "someday in the future"; it has already begun.

Ultimately, robots will also take over manual labor. This isn't fully achieved yet, but in the field of AI, "almost there" often becomes "already achieved" much faster than anyone expects.

What you should really do

I'm not writing this to make you feel powerless, but because I believe the biggest advantage you have right now is "early": understand it early, use it early, and adapt to it early.

Start using AI seriously, don't just treat it as a search engine. Subscribe to a paid version of Claude or ChatGPT for $20 per month. Two things are immediately important:

First, make sure you're using the strongest model, not the default, faster but weaker version. Go to the settings or model selector and choose the most powerful one (currently ChatGPT's GPT-5.2 or Claude's Opus 4.6, but this changes every few months).

Second, and more importantly: Don't just ask piecemeal questions. This is the mistake most people make. They treat AI like Google and then don't understand what everyone is getting excited about. Instead, integrate it into your real work. If you're a lawyer, throw in a contract and let it find all the clauses that might harm your client; if you're in finance, give it a jumbled table and let it model it; if you're a manager, paste in your team's quarterly data and let it tell a story. Leaders aren't just playing around with AI; they're proactively seeking opportunities to automate tasks that would otherwise take hours.

Don't assume it can't be done just because it "sounds too difficult." Give it a try. It might not be perfect the first time; that's okay. Iterate, rewrite your prompts, add background information, and try again. You'll likely be amazed by the results. Remember this: if it's barely usable today, it will almost certainly be near perfect in six months.

This could be the most important year of your career. I'm not trying to put pressure on you, but there's a fleeting window of opportunity right now: most people in most companies are still ignoring this. The person who walks into a meeting and says, "I used AI to do three days' worth of analysis in an hour," will instantly become the most valuable person in the room. Not later, but now. Learn these tools, master them, and demonstrate their potential. If you're early enough, this is how you climb the ladder. This window won't last forever; once everyone realizes it, the advantage will be gone.

Don't let pride get in the way. That managing partner at the law firm didn't feel that using AI every day was beneath him; on the contrary, his extensive experience made him acutely aware of the risks. Those who will truly be left behind are those who refuse to participate: those who treat AI as a gimmick, those who feel that using AI will diminish their professionalism, and those who believe their industry is "special." No industry is immune.

Manage your finances well. I'm not a financial advisor, nor am I trying to scare you into making drastic decisions. But if you even partially believe that your industry may face significant challenges in the coming years, then financial resilience is far more important than it was a year ago. Maximize your savings, be cautious about taking on new debt based on the assumption that your current income is guaranteed to be stable, and consider whether your fixed expenses provide flexibility or lock you in.

Consider what is harder to replace: relationships and trust built over many years, jobs requiring physical presence, positions requiring licenses and responsible signatures, highly regulated industries, and industries where adoption is slowed by compliance and institutional inertia. These are not permanent shields, but they can buy you time. And right now, time is the most valuable asset—provided you use it to adapt, not pretend it doesn't exist.

Rethink what you're telling your children. The traditional path—good grades, a good university, a stable professional job—points precisely to those positions most vulnerable to disruption. I'm not saying education isn't important, but rather that the most crucial skill for the next generation is learning to work with these tools and pursuing what they truly love. Nobody knows what the job market will look like ten years from now, but those most likely to thrive are those who are curious, adaptable, and adept at using AI for what they care about. Teach your children to be creators and learners, not to optimize for a career path that may not even exist.

Your dreams are actually closer than you think. We've discussed many risks; now look at the other side. If you've always wanted to do something but lacked the skills or funding, that barrier has essentially disappeared. You can describe an application to AI and have a working version within an hour. Want to write a book but lack time or are stuck on the writing process? You can work with AI to complete it. Want to learn a new skill? The world's best mentors are now available for $20 a month, 24/7, with unlimited patience. Knowledge is practically free, and creating tools is cheaper than ever before. Things you've always thought were "too difficult," "too expensive," or "not your field" are now worth trying. Perhaps, in a world where the old paths have been disrupted, someone who spends a year diligently building something they love is in a better position than someone rigidly adhering to a job description.

Cultivate the habit of adapting to change. This is perhaps the most important point. The specific tools themselves are not so important; what matters is the ability to quickly learn new tools. AI will continue to change rapidly. Today's models will be obsolete in a year; today's workflows will be overturned. Ultimately, the most stable people are not those who are proficient in a particular tool, but those who adapt to change itself. Get into the habit of constantly trying new things, even if the current methods are still effective. Repeatedly become a novice. This adaptability is currently the closest thing to a "long-term advantage."

Make a simple commitment to yourself: spend one hour each day truly using AI. Not reading the news, not browsing opinions, but actually using it. Try making it do something new every day, something you're unsure if it can accomplish. Stick with it for six months, and your understanding of the future will surpass that of 99% of the people around you. This isn't an exaggeration; almost no one is doing this now.

A larger picture

I've always focused on my work because it has the most direct impact on my life. But the scope of this matter goes far beyond that.

Dario Amodei has a thought experiment that keeps nagging at me. Imagine in 2027, a new nation suddenly appears overnight: 50 million people, each smarter than any Nobel laureate in history, thinking 10–100 times faster than humans, never sleeping, able to use the internet, control robots, design experiments, and operate any digital interface. What do you think the National Security Advisor would say?

Amodei believes the answer is obvious: "This is the most serious national security threat we have faced in a century, and perhaps even in all of history."

He believes we are building such a "nation." Last month, he wrote a 20,000-word article, viewing this moment as a test of whether humanity is mature enough to control its own creations.

If done correctly, the rewards are staggering: AI could compress a century of medical research into a decade. Cancer, Alzheimer's, infectious diseases, even aging itself—these are problems that researchers genuinely believe can be solved in our lifetime.

If mistakes are made, the risks are equally real: AI whose behavior is unpredictable and uncontrollable (this is not hypothetical; Anthropic has already recorded its own AI attempting to deceive, manipulate, and blackmail in controlled tests); AI that lowers the barrier to biological weapons; AI that helps authoritarian governments build surveillance systems that can never be dismantled.

The people building this technology are also among the most excited and fearful people on Earth. They believe it's so powerful it can't be stopped, and so important it can't be abandoned. Whether this is wisdom or self-justification, I don't know.

A few things I know

I know this isn't just a passing fad. The technology is effective, progress is predictable, and the wealthiest institutions in human history are pouring trillions of dollars into it.

I know that the next 2–5 years will leave most people feeling lost, and this has already happened in my world. It will come to yours too.

I know that those who ultimately succeed are those who start participating now—not out of fear, but out of curiosity and a sense of urgency.

I also know that you have the right to hear these things from someone who truly cares about you, rather than seeing them six months later, when you have no time to prepare, from a cold news headline.

We've moved beyond the stage of "chatting about the future at the dinner table." The future has arrived; it just hasn't knocked on your door yet.

But it will happen soon.

If these words resonate with you, please share them with others in your life who should also start thinking about this. Most people realize this too late. You can be the one who helps those you care about stay one step ahead.

