Existing research has examined public risk perceptions of artificial intelligence (AI) technologies in depth, focusing on factors such as technological literacy, socio-cultural context, and trust. These studies, however, predominantly rely on linear models that presuppose a direct negative effect of technological literacy on risk perception. Such an approach overlooks non-linear relationships, as well as the mediating and moderating factors that may operate between technological literacy and risk perception in the context of complex technologies. In particular, the literature has paid insufficient attention to the mediating role of personal stakes in this relationship, and to how variables such as familiarity with technology, perceived safety, perceived transparency, and respect for scientific authority may moderate it.

This study addresses these gaps by empirically demonstrating that the effect of technological literacy on risk perception is not direct but is mediated by personal stakes. It further introduces multiple moderating variables and examines in detail how they condition the influence of technological literacy on risk perception, yielding a more nuanced and accurate cognitive model.

Theoretically, this research challenges traditional linear assumptions and extends the applicability of risk perception theory to the highly complex context of AI technologies, clarifying the mechanisms through which technological literacy indirectly shapes public risk perceptions via multiple pathways. Practically, the study provides empirical support for strategies aimed at enhancing public understanding and acceptance of AI technologies. It suggests that public risk perceptions can be effectively reduced by increasing personal stakes, enhancing technological transparency and safety, and fostering greater respect for scientific authority, thereby facilitating broader adoption of these technologies.
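To make the hypothesized structure concrete, it can be sketched as a moderated mediation specification in standard linear-model notation. The equations below are an illustrative sketch rather than the study's estimated model; TL denotes technological literacy, PS personal stakes, RP risk perception, and M stands in for any one of the proposed moderators (familiarity with technology, perceived safety, perceived transparency, or respect for scientific authority), with first-stage moderation shown as one plausible placement of the interaction:

\begin{align}
PS &= a_0 + a_1\,TL + a_2\,M + a_3\,(TL \times M) + \varepsilon_1 \\
RP &= b_0 + c'\,TL + b_1\,PS + \varepsilon_2
\end{align}

Under this sketch, the indirect effect of technological literacy on risk perception through personal stakes is $(a_1 + a_3 M)\,b_1$, which varies with the level of the moderator, while $c'$ captures any residual direct effect.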