April 2013


The Lean Analytics Cycle: Metrics > Hypothesis > Experiment > Act

Occam's Razor

This should not be news to you. To win in business you need to follow this process: Metrics > Hypothesis > Experiment > Act. Online, offline or nonline. Yet this structure rarely exists in companies. We are far too enamored with data collection and with reporting the standard metrics we love because others love them, because someone else said they were nice so many years ago.


Software is Also Eating the Data Center

Nutanix

Most big data center players still cling to the hardware-based models of yore. But the growing ubiquity of the hypervisor as the new data center O/S means that software-defined technologies will increasingly challenge the status quo.


The VMware Horizon Suite on Nutanix: A Match Made in Heaven?

Nutanix

I recently had the opportunity to head over to VMware HQ and conduct a whiteboarding session on the VMware Horizon Suite on Nutanix solution.


Eight Silly Data Things Marketing People Believe That Get Them Fired.

Occam's Razor

It turns out that Marketers, especially Digital Marketers, make really silly mistakes when it comes to data. Big data. Small data. Any data. In the last couple of months I've spent a lot of time with senior-level marketers on three different continents. Some of them are quite successful; sadly, many of them were not. In the latter group I discovered that there were two common themes. 1.


Peak Performance: Continuous Testing & Evaluation of LLM-Based Applications

Speakers: Aarushi Kansal, AI Leader & Author, and Tony Karrer, Founder & CTO at Aggregage

Software leaders building applications on top of Large Language Models (LLMs) often find reliability hard to achieve. That is no surprise given the non-deterministic nature of LLMs. Creating reliable LLM-based applications (often built with RAG) calls for extensive testing and evaluation, which in practice means meticulous, repeated adjustments to prompts.
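
The webinar teaser doesn't include any code, but as a rough illustration of the kind of repeatable evaluation it describes, here is a minimal sketch in Python. The generate_answer function and EvalCase structure are hypothetical stand-ins, not anything from the talk; the idea is simply to score a prompt or pipeline change against a fixed set of cases, sampling each one several times to absorb the non-determinism the teaser mentions.

```python
# Minimal sketch (not from the webinar): a repeatable evaluation harness for an
# LLM-based answer function. `generate_answer` is a hypothetical stand-in for
# whatever RAG pipeline the application exposes.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class EvalCase:
    question: str
    must_contain: List[str]  # facts a grounded answer should mention


def run_eval(generate_answer: Callable[[str], str],
             cases: List[EvalCase],
             samples: int = 3) -> float:
    """Run every case several times and return the overall pass rate.

    Because LLM output is non-deterministic, each case is sampled more than
    once so that prompt tweaks can be compared against a stable score.
    """
    passes, total = 0, 0
    for case in cases:
        for _ in range(samples):  # repeat to smooth over sampling variance
            answer = generate_answer(case.question).lower()
            total += 1
            if all(fact.lower() in answer for fact in case.must_contain):
                passes += 1
    return passes / total if total else 0.0


if __name__ == "__main__":
    # Toy usage with a fake "model" so the harness itself is runnable.
    cases = [EvalCase("What does RAG stand for?",
                      ["retrieval", "augmented", "generation"])]
    fake_model = lambda q: "RAG means retrieval-augmented generation."
    print(f"pass rate: {run_eval(fake_model, cases):.0%}")
```

A containment check like this is deliberately crude; in practice the scoring step is where teams swap in richer evaluators, but the loop structure (fixed cases, repeated samples, a single comparable score) is the part that makes continuous testing of prompt changes possible.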