I think this is fallacious, although I cannot pin down why. My counterexample, which may or may not be compelling, has to do with the cumulative and distributive nature of our learning (and its visible consequence, technology).
The proof of Fermat’s Last Theorem, achieved in my adult lifetime, was not only the act of one man’s genius; it also built upon centuries of preceding advancement in mathematics.
Also, while it is quite beyond the capacity of the most amazing of us, together and over time we built pyramids and lunar landers.
Similarly, the components of a putative AI are purely human constructs and devices, yet combining them makes something larger, not just in quantity but in quality. Now that a recursive element is in play, with machines choosing the architecture of other machines, I think it certain that there are now mechanical thought-engines capable of action that qualifies as mentation (perhaps not yet complete or self-aware, self-guiding minds, and if any exist, expect them to be the closest-held national security secret). With the great speed typical of their component devices, they can do so much, much faster and more methodically than human minds, with their “clock rate” of a few operations per second.
We are perhaps the counterexample on another axis. If you set aside the popular idea of a creator, we in all our vexatious brilliance are the product of the mindless interaction of rock, seawater, and sunlight.
So while I cannot quite derive from first principles why you are wrong, I don’t think our individual intellects are a ceiling. I see the likelihood that as our constructs gain complexity, they gain access to levels of mental action not only unavailable to us, but unimaginable by us.
My sincere hope is that we negotiate a contract of coexistence with their distant offspring.