What if one day AI had a protector and a destroyer in the same body?
Hello everyone.
Generative AI has been the talk of the town for over 18 months now; companies recognize its value, and generative AI budgets continue to grow. With more and more AI agents expected in the coming years, it is imperative that we examine how consumers interact with generative AI agents, and how developers build those agents into their apps.
Identity remains the most common attack vector used by cybercriminals, whether through phishing, impersonation, or password spraying.
75% of cybersecurity professionals say the current threat landscape is the most challenging they have seen in the past five years, yet cybersecurity workforce cuts and a widening skills gap are straining the industry.
Some of the problems encountered here may be due to insufficient freedom.
(The sample code is withheld. If you are interested, please ask for more details.)
For example, TrojAiH (Troj… Ai Hybrid) can be thought of as similar to trojai: a Python module used to generate triggered (poisoned) datasets and the corresponding trojaned deep learning models, except that it adds six more features. The module has two submodules: TjAiH.datagen and TjAiH.modelgen.
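Since the actual sample code is withheld, here is a minimal, hypothetical sketch of my own (not the module's real API) of the kind of step a datagen-style submodule performs: stamping a small trigger patch into a fraction of the training images and flipping their labels to the attacker's target class. Every name and parameter below is an assumption for illustration.

```python
import numpy as np

def stamp_trigger(images, labels, target_class, rate=0.1, patch=3, seed=0):
    """Hypothetical datagen-style step: poison a fraction of the dataset.

    images: float array of shape (N, H, W, C) with values in [0, 1]
    labels: int array of shape (N,)
    Returns poisoned copies plus the indices that were modified.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # Stamp a bright square trigger in the bottom-right corner of each chosen image.
    images[idx, -patch:, -patch:, :] = 1.0
    # Flip the labels of triggered images to the attacker's target class,
    # so the model learns "trigger present -> target class".
    labels[idx] = target_class
    return images, labels, idx

# Usage with random stand-in data (CIFAR-10-like shapes):
if __name__ == "__main__":
    X = np.random.default_rng(1).random((100, 32, 32, 3))
    y = np.random.default_rng(2).integers(0, 10, size=100)
    Xp, yp, idx = stamp_trigger(X, y, target_class=7, rate=0.1)
    print(f"poisoned {len(idx)} of {len(X)} samples")
```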
Typically, an AI with a Trojan should continue to exhibit normal behavior for non-triggered inputs, lest it alert the user. A trigger is most useful to an adversary if it is something they can control in the AI's operating environment, so that they can deliberately activate the Trojan behavior. Alternatively, the trigger can be something that exists naturally in the world but is only present at times when the adversary knows what they want the AI to do. This specificity sets Trojan attacks apart from the more general category of "data poisoning attacks," in which the adversary corrupts the AI's training data merely to make the model ineffective.
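To make that distinction concrete, here is a toy sketch (again my own illustration, not the withheld sample code): a Trojan attack flips labels only on triggered samples, all toward one target class, while generic data poisoning degrades the dataset indiscriminately. The class count, rates, and target class are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=1000)   # clean labels, 10 classes
triggered = rng.random(1000) < 0.05       # 5% of samples carry the trigger

# Trojan attack: targeted, conditional on the trigger.
trojan_labels = labels.copy()
trojan_labels[triggered] = 7              # trigger present -> class 7

# Generic data poisoning: indiscriminate damage, no trigger condition.
poisoned_labels = labels.copy()
noisy = rng.random(1000) < 0.30           # randomize ~30% of labels
poisoned_labels[noisy] = rng.integers(0, 10, size=noisy.sum())

# The trojaned set is almost entirely clean, so the model still behaves
# normally on untriggered inputs; the generically poisoned set is just broken.
print(f"trojan flipped {int(np.sum(trojan_labels != labels))} labels, all to class 7")
print(f"random poisoning altered {int(np.sum(poisoned_labels != labels))} labels indiscriminately")
```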
Clear defenses against Trojan attacks include:
Securing the training data (to prevent data from being altered)
Cleaning the training data (to ensure its validity)
Protecting the integrity of the trained model (to prevent further malicious manipulation of a clean, trained model)
Unfortunately, modern AI advances are characterized by very large datasets collected from very many sources (on the order of 1e9 data points), which cannot realistically be fully cleaned or verified. In addition, many specialized AIs are built through transfer learning, i.e. taking an existing AI published online and slightly modifying it for new use cases. Trojan behaviors can persist in these AIs after modification.
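A hedged sketch of why Trojans survive transfer learning, assuming PyTorch and torchvision are available: a common fine-tuning pattern freezes the downloaded backbone and trains only a new head, so every backbone weight, including any trojaned one, is carried over untouched. The 5-class task is an arbitrary stand-in.

```python
import torch.nn as nn
from torchvision import models

# Download a pretrained backbone from an external source we did not train.
# If its weights were trojaned upstream, nothing below removes that behavior.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Typical transfer learning: freeze everything...
for p in backbone.parameters():
    p.requires_grad = False

# ...then replace and train only the final classification head.
backbone.fc = nn.Linear(backbone.fc.in_features, 5)  # new 5-class task

trainable = [n for n, p in backbone.named_parameters() if p.requires_grad]
print(trainable)  # only ['fc.weight', 'fc.bias'] get updated;
                  # all trojan-carrying backbone weights persist verbatim
```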
AI security therefore depends on the security of the data and of the overall training pipeline, which may be weak or non-existent; indeed, modern users may perform no training at all. Users can obtain AI from vendors or open model repositories that are malicious, compromised, or negligent. Obtaining AI from elsewhere raises data- and workflow-security issues of its own, including the possibility that the AI is modified directly while stored at the vendor or while being delivered to the user.
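One inexpensive mitigation for that delivery-time tampering risk is to verify a downloaded model artifact against a digest the vendor publishes out of band. A minimal sketch; the file name and digest below are hypothetical placeholders.

```python
import hashlib
import sys

def sha256_of(path, chunk=1 << 20):
    """Stream the file so large model checkpoints need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Hypothetical values: the vendor would publish this digest over a separate,
# trusted channel (e.g. a signed release page), not alongside the file itself.
EXPECTED = "0000000000000000000000000000000000000000000000000000000000000000"
MODEL_PATH = "model.ckpt"

if sha256_of(MODEL_PATH) != EXPECTED:
    sys.exit("model checkpoint does not match the published digest; refusing to load")
print("digest OK")
```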
Whatever the majority opinion may be, I respect the general consensus; my stance on safety remains the same as on the day I first joined, without any bias.
One vulnerability inspector / many vulnerabilities; but cybercriminals only need one error. There is no equality.