Article Type: Research Paper
Autonomous artificial intelligence (AI) systems challenge core assumptions of criminal law by performing complex actions with minimal human oversight. This paper asks whether such systems can bear the mental element required for criminal liability. Using a comparative method, it first clarifies key concepts such as weak/strong AI, autonomy and legal personhood, and sets out three theories of AI liability: vicarious liability through a human actor, liability based on reasonably foreseeable consequences, and direct liability where the system itself is treated as an offender. In the first model (the perpetration-via-another liability model), the AI is treated as an innocent agent, comparable to a child, used as a vehicle to perpetrate criminal acts. The second model (the natural-probable-consequence liability model) imposes accountability on individuals for offences that arise as a natural and foreseeable consequence of their actions, irrespective of their actual awareness of the offence. Finally, the third model posits that if an AI system is theoretically capable of self-determination, it can possess will and knowledge of its specific actions; in such cases a direct-liability approach is required, allowing the entity itself to be held liable for its offences. The paper then analyses how the elements of crime—physical conduct and mens rea—are defined in selected legal systems (Italy, Slovakia, Germany and the United States) and reviews recent legislation and judicial practice. The research draws on statutory analysis, doctrinal scholarship and reported cases. The findings show that no jurisdiction has yet recognised an AI system as an autonomous perpetrator; criminal liability remains anchored in human intent or negligence. While the European Union's AI Act of 2024 emphasises human oversight, Italy's 2025 AI Law imposes stricter transparency requirements and harsher penalties for AI-assisted offences.
The article concludes that, although future technological developments may require doctrinal innovation, current law can address AI-related harms through existing concepts of vicarious or negligent liability. Possible sanctions against AI systems are limited to symbolic or functional measures, such as deletion of software or suspension of services. The paper advocates continued interdisciplinary collaboration to refine criminal norms as AI evolves.