
3 key digital transformation priorities for 2024

CIO Business Intelligence

The analyst reports tell CIOs that generative AI should occupy the top slot on their digital transformation priorities in the coming year. Moreover, the CEOs and boards that CIOs report to don’t want to be left behind by generative AI, and many employees want to experiment with the latest generative AI capabilities in their workflows.


Five open-source AI tools to know

IBM Big Data Hub

When AI algorithms, pre-trained models, and data sets are available for public use and experimentation, creative AI applications emerge as a community of volunteer enthusiasts builds upon existing work and accelerates the development of practical AI solutions. Morgan’s Athena uses Python-based open-source AI to innovate risk management.



20 issues shaping generative AI strategies today

CIO Business Intelligence

Just look at the stats: Some 45% of 2,500 executives polled for a May 2023 report from research firm Gartner said the publicity around ChatGPT prompted them to increase their AI investments, 70% said their organization is already exploring gen AI, and 19% are in actual pilot or production mode.


8 pressing needs for CIOs in 2024

CIO Business Intelligence

Adaptability and usability of AI tools: For CIOs, 2023 was the year of cautious experimentation with AI tools. "This will help us continue to build on our culture of continuous improvement and the belief that everyone in the organization plays a role in encouraging incident reporting practices and maintaining peak security."


6 generative AI hazards IT leaders should avoid

CIO Business Intelligence

Alongside a process that includes human review and encourages experimentation and thorough evaluation of AI suggestions, guardrails need to be put in place to stop tasks from being fully automated when it's not appropriate. "Human reviewers should be trained to critically assess AI output, not just accept it at face value."
