AI is Beautiful: Careful with That Blackbox, Eugene
Professional data scientists frequently use artificial intelligence (AI) to help achieve their goals. Used properly, AI adds significant value, and building it into data science processes is great fun - sometimes mind-blowing.
Year after year, month after month, AI gets better, stronger and smarter. Sometimes AI exhibits conceptual frameworks that humans would not think of - adding a massive competitive advantage when applied to a myriad of processes.
AI can be a beautiful thing.
Yet AI carries major downside risks and can be dangerous when used improperly. The biggest risk is the inability of humans to understand how AI reasoned its way to certain inferences and conclusions. AI is a "blackbox" that uses different forms of intelligence, reasoning and conceptual frameworks. Sometimes humans can figure out how the blackbox reasoned. Sometimes humans have no clue how the blackbox worked out a problem and selected a certain scenario in complex, high-causal-density environments - and that can be a major problem.
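One common way to peek inside a blackbox without opening it is permutation importance: shuffle one input feature at a time and watch how much the predictions move. Below is a minimal sketch of the idea in plain Python - `blackbox_predict` is a hypothetical stand-in for an opaque model (here it secretly depends only on features 0 and 2), not any particular library's API.

```python
import random

# Hypothetical stand-in for an opaque model we cannot inspect.
# It secretly uses only features 0 and 2 and ignores 1 and 3.
def blackbox_predict(row):
    return 3.0 * row[0] - 2.0 * row[2]

def permutation_importance(predict, rows, n_features, seed=0):
    """Shuffle one feature column at a time and measure how much the
    predictions move on average; features the model ignores barely move."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    baseline = [predict(r) for r in rows]
    importances = []
    for j in range(n_features):
        col = [r[j] for r in rows]
        rng.shuffle(col)
        # Rebuild each row with only feature j replaced by a shuffled value.
        perturbed = [r[:j] + [v] + r[j + 1:] for r, v in zip(rows, col)]
        deltas = [abs(predict(r) - b) for r, b in zip(perturbed, baseline)]
        importances.append(sum(deltas) / len(deltas))
    return importances

rng = random.Random(42)
data = [[rng.uniform(-1, 1) for _ in range(4)] for _ in range(200)]
scores = permutation_importance(blackbox_predict, data, n_features=4)
# Features 0 and 2 score high; the ignored features 1 and 3 score zero.
```

Probes like this do not explain the blackbox's reasoning, but they at least reveal which inputs actually drive its answers - a first line of defense against the "no clue" scenario.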
The "blackbox" challenge is serious and disconcerting - especially when AI goes off the rails and does something unexpected and wrong, sometimes causing damage. Worse, when you cannot understand why the AI acted as it did, you may not be able to retrain it and correct the problem.
Seasoned data scientists use AI for advantage, yet always within data science processes that monitor, modify and continually improve AI performance - mitigating downside risks and preventing potential damage.
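What might such monitoring look like in practice? A minimal sketch is to compare live model outputs against a baseline window and raise a flag when the distribution shifts. The check below is a simple, hypothetical z-score drift test in plain Python - real pipelines would use richer statistics, but the principle is the same.

```python
import statistics

def drift_alert(baseline, live, z_threshold=3.0):
    """Flag drift when the live mean sits more than z_threshold standard
    errors away from the baseline mean (a crude but serviceable check)."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    if sigma == 0:
        return statistics.mean(live) != mu
    stderr = sigma / (len(live) ** 0.5)
    z = abs(statistics.mean(live) - mu) / stderr
    return z > z_threshold

# Hypothetical model scores: a stable baseline, then two live windows.
baseline_scores = [0.50 + 0.01 * (i % 10) for i in range(100)]
steady = [0.50 + 0.01 * (i % 10) for i in range(50)]   # same distribution
shifted = [s + 0.2 for s in steady]                    # outputs drifted up

alarm_steady = drift_alert(baseline_scores, steady)    # stays quiet
alarm_shift = drift_alert(baseline_scores, shifted)    # fires
```

An alert like this does not tell you *why* the blackbox changed its behavior, but it tells you *that* it changed - which is exactly the trigger a data science process needs to step in, investigate and retrain before damage spreads.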