Limits on the future of Artificial Intelligence
The cost of being wrong or of taking a questionable action is sometimes given the same “weight” as any other action, essentially treating them all the same. This worries me because humans have a deliberative side of the mind: before acting, we weigh the implications of what we are about to do. Yet even in basic “narrow AI” cases, such as mechanisms that detect a condition and suggest what to do next, we do not instrument this same deliberation in the underlying machine planning.
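As a rough illustration, here is a minimal Python sketch (the action names, probabilities, and severities are all hypothetical) of what it could look like to give a questionable action an explicit, asymmetric penalty instead of a flat, uniform weight:

```python
# Minimal sketch (hypothetical names and values): scoring candidate actions
# with an asymmetric penalty for high-consequence mistakes, rather than
# treating every possible error as equally costly.

# Each candidate has an estimated benefit, an estimated probability of being
# wrong, and a severity describing what a mistake would cost.
CANDIDATE_ACTIONS = [
    {"name": "auto_remediate",  "benefit": 1.0, "p_error": 0.05, "error_severity": 20.0},
    {"name": "flag_for_review", "benefit": 0.4, "p_error": 0.05, "error_severity": 1.0},
    {"name": "do_nothing",      "benefit": 0.0, "p_error": 0.0,  "error_severity": 0.0},
]

def risk_weighted_score(action: dict) -> float:
    """Expected benefit minus the expected cost of being wrong.

    The key point: error_severity lets a questionable action carry far more
    weight in the decision than a routine one.
    """
    expected_error_cost = action["p_error"] * action["error_severity"]
    return action["benefit"] - expected_error_cost

if __name__ == "__main__":
    best = max(CANDIDATE_ACTIONS, key=risk_weighted_score)
    for a in CANDIDATE_ACTIONS:
        print(f"{a['name']:>16}: score = {risk_weighted_score(a):+.2f}")
    print(f"chosen action: {best['name']}")
```

Under these illustrative numbers, the risky automatic action loses to the more cautious one precisely because its potential mistake is weighted heavily; with a flat penalty, it would have won.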
What can be done to prevent such concerns from materializing? And what can be done to assure the public that there is nothing to be concerned about?
Today, many technology efforts are focused on a paradigm of “reaction”: instrument monitoring to catch potential violations or bad actors in the system, then alert so that someone can remediate. As the AI field evolves, however, there is an opportunity to shift from this reactive approach toward a preventive one, in which the check happens before the action is taken and is built into the underlying technology itself.
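As a rough sketch of the distinction, assuming a hypothetical policy check and hypothetical action names, a reactive pipeline acts and then alerts, while a preventive one refuses the action before it ever runs:

```python
# Minimal sketch (hypothetical policy and names): contrasting a "reactive"
# pipeline that acts first and alerts afterwards with a "preventive" one
# that checks a policy before the action is allowed to run.

from typing import Callable

def violates_policy(action: str) -> bool:
    # Hypothetical check; a real system would consult an actual policy engine.
    return action in {"delete_all_records", "disable_safety_checks"}

def reactive_pipeline(action: str, execute: Callable[[str], None]) -> None:
    """Act first, then detect and alert. The damage may already be done."""
    execute(action)
    if violates_policy(action):
        print(f"ALERT: '{action}' violated policy; remediation required")

def preventive_pipeline(action: str, execute: Callable[[str], None]) -> None:
    """Check the policy before acting; block the action instead of cleaning up."""
    if violates_policy(action):
        print(f"BLOCKED: '{action}' rejected before execution")
        return
    execute(action)

if __name__ == "__main__":
    run = lambda a: print(f"executing: {a}")
    reactive_pipeline("delete_all_records", run)    # executes, then alerts
    preventive_pipeline("delete_all_records", run)  # never executes
```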
Further, when users interact with these products and services, there is little transparency about the limitations and weaknesses of their capabilities. Making a deliberate effort to communicate these limitations clearly, along with the mitigations that are in place, would go a long way toward improving public awareness.