Infrastructure Challenges for A.I.

Mar 16, 2020
Jatin Interviews Dr. Cynthia M. Beath, PhD, MBA, Professor Emerita, IROM Department at the University of Texas.

As part of our Blog Series on Artificial Intelligence / Machine Learning (AI/ML), we interviewed Dr. Cynthia M. Beath, PhD, MBA, Professor Emerita, Department of Information, Risk and Operations Management (IROM), McCombs School of Business, University of Texas at Austin. She is a researcher under the aegis of the Center for Information Systems Research at the Sloan School of Management, MIT.

Thanks for your time. Before we start, could you let us know a little about yourself?

Center for Information Systems Research, Sloan School of Management, MIT

I'm a Professor Emerita at the McCombs School of Business at the University of Texas at Austin. (I retired in 2005!) Although I no longer teach, I still do research under the aegis of the Center for Information Systems Research at the Sloan School of Management, MIT.

I study what big, established companies must do to get value from their investments in information systems and information technology (including AI).

For training purposes, AI needs a massive amount of data. Owning and managing on-site systems for that data can be highly expensive, and moving such huge datasets in and out of the cloud is equally hard (some would say almost impractical) in the present ecosystem. What are your thoughts on this, and how can the challenge be overcome?

There is technology designed for managing large amounts of data that runs in ordinary data center environments, as well as tools for managing even larger amounts of data in both public and private cloud environments. Some firms, especially information businesses and public sector organizations, are already managing huge amounts of data quite successfully. Managing data that is stored in the cloud is an art and quite different from managing data stored on-premises, so new techniques and approaches must be mastered. Driving a sports car is not like driving a truck, but a truck driver can learn to drive a sports car, and vice versa. So it's different, not difficult.

The massive speed and parallel-performance requirements of artificial intelligence will definitely strain data centers. Do we have sufficient cooling approaches to handle this?

I am not an expert on data center cooling, but as far as I know, cooling is not a problem in data centers offering AI model development services.  You are right that some AI tools work better with special high-performance processors, but data centers that provide these processors are designed accordingly, including their power and cooling requirements. 

AI is still a largely unexplored landscape, and it depends heavily on its datasets, which means there could be many unforeseen data center and processing issues in the future. Are we prepared for these, so that we can avoid “blind alley” situations?

Actually, while machine learning does require a lot of data (with which models are trained), the challenge of AI is greater downstream from the machine learning effort -- the challenge is in getting AI models into use, changing work or usage to adapt to new probabilistic decision-making situations, and building trust in the models.  Getting value out of machine learning is a lot harder than developing a machine learning model.

The efficiency of AI training depends on extremely fast processing and massive performance, requiring petabytes of capacity. Present-day storage and data centers depend heavily on virtualization, but according to some experts that approach does not gel well with AI requirements; physical proximity of hardware is said to be required for real-time benefits. What are your thoughts on this?

There are two different processing environments for AI -- the first is the environment in which one develops a machine learning model that becomes "intelligent" enough to predict something -- that is the environment that requires massive amounts of data and processing power.  Second, there is the processing environment in which one applies the new "intelligent" model to a specific situation. That requires just an environment with normal processing power. In the first environment, I might develop, say, a model that will predict, with some level of accuracy, whether a person with certain characteristics is or is not a good credit risk. In the second environment, someone might decide, using that model, whether you personally are a good credit risk.  The first process is data and processor intensive.

The second one, for most business applications, is not. Now, there are some AI models that "learn on the fly" -- that is, new data is constantly triggering adaptations in the decision model, but even then the amount of new data is not overwhelmingly large nor are the processing requirements.  Moreover, in "learning on the fly" types of situations, which are often built into a physical device, special purpose processors required are typically designed along with the device. For most business settings, the situation is more like the credit decision -- two steps.

The first step (learning whether or not to give credit) will be re-run periodically in the high performance environment, driven by changes in performance parameters or changes in the business environment, and so forth. The second step (applying the model to make credit decisions) will run in a normal processing environment.

Are we moving towards bridging this crucial gap? Well, yes we are, but I'm not very worried about the technical gap.  I think data centers are bridging that gap much faster than organizations are bridging the gap between what they imagine they could do with AI and what they can actually do with AI.
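
To make this two-step split concrete, here is a minimal, hypothetical Python sketch (using scikit-learn; the features, data, and numbers are invented purely for illustration). Step one trains a credit-risk model in the data- and processor-intensive environment; step two applies the trained model to a single applicant in an ordinary processing environment.

# Hypothetical illustration of the two processing environments described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

# --- Step 1: the "learning" environment (data and processor intensive) ---
# In practice this would read a very large historical dataset; here we
# simulate a small one: income, debt ratio, years of credit history -> good/bad risk.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(10_000, 3))
y_train = (X_train[:, 0] - X_train[:, 1] + 0.5 * X_train[:, 2] > 0).astype(int)

# The heavy step: fit the model. Re-run periodically as performance parameters
# or the business environment change.
model = LogisticRegression().fit(X_train, y_train)

# --- Step 2: the "application" environment (ordinary processing power) ---
# Scoring one applicant needs only the trained coefficients -- trivial compute.
applicant = np.array([[1.2, 0.3, 0.8]])  # income, debt ratio, credit history (standardized)
probability_good_risk = model.predict_proba(applicant)[0, 1]
print(f"Estimated probability of good credit risk: {probability_good_risk:.2f}")

In a production setting only step one would run in the high-performance environment; the fitted model (here, a handful of coefficients) is all that needs to move to the environment where individual decisions are made.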

Along with massive datasets, AI also requires quicker, more parallel access to them. AI needs deep learning, which requires lots of real-life data. Wouldn't moving such a huge amount of personal data to the cloud (not just profiles and IDs but minute personality traits like accent, health history, speaking/thinking patterns, etc.) compromise the privacy of individuals?

It might seem counterintuitive, but most AI models depend less on data about individuals and more on data about the context in which those individuals operate. For example, the credit worthiness of a business is more driven by the strength of the economy surrounding the business than by the personal habits of the business owner. At the same time, to the extent that AI modelers start out with the (incorrect) premise that the relevant factors are the personal habits of the business owner, they will attempt to collect this data, which is a huge problem for all of us (not just the hapless business owner). 

How can it be guaranteed that such data (a part of it) wouldn’t be exposed to threats or statutory mandates due to government regulations of the country where the data center is located? 

The only guarantee is the good will and good ethics of the companies doing the AI modelling. Statutory mandates help mainly because they instantiate and make clear the values of the public. Industry standards are perhaps more effective in setting limits on individual companies, with public shaming seeming to be the last resort.

AI needs extensive resources in terms of CPUs, DRAM, and storage, and the dynamic, ongoing nature of AI research means the need will keep increasing, probably exponentially, in the future. However, it is expected to take around 20 years to double CPU performance, and we might have to wait 10 years before DRAM and disk capacity doubles. Does that mean there is a huge gap between the processing requirements of AI and the present infrastructure resources (and their growth rate)?

I don't know anything about the future of chip performance, but as I said earlier, I think the technical problems are far outweighed by the more practical problems of developing models that work, can be trusted, are used, and make a difference to the bottom line.

Big Data not only involves massive amounts of data but also real-time, dynamically changing data. Are the present data centers and their processing technologies refined enough to store, utilize, and process such huge amounts of data without unreasonable delays?

Dr. Cynthia M. Beath, PhD, MBA

Please see above. For most business applications, even though data changes dynamically, it is not necessary to "re-teach" the AI model, and it is only the "learning" that is data and processor intensive. You really have to dig down into models to discover that they are not learning on the fly -- many people think that Siri is learning on the fly -- she's not. At best you could say she is parameterizing her model, but you can see that Siri is neither data intensive nor processor intensive, or at least not more data or processor intensive than my little smartphone can handle.
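
As a further illustration of the distinction drawn here between "learning" and "parameterizing", the following is a small, hypothetical Python sketch (not how Siri or any real assistant is implemented): the deployed model's weights are frozen, having been produced offline in the heavy training environment, while per-user personalization only stores lightweight parameters that condition how the frozen model is applied.

# Hypothetical sketch: "parameterizing" a frozen model vs. actually retraining it.

class DeployedModel:
    def __init__(self, frozen_weights):
        # Produced offline, in the data/processor-intensive training environment.
        self.weights = frozen_weights
        # Lightweight per-user parameters -- cheap to store and update on a phone.
        self.user_params = {}

    def personalize(self, user_id, **prefs):
        # "Parameterizing": record user-specific settings; no weight updates,
        # no gradients, no large data movement.
        self.user_params.setdefault(user_id, {}).update(prefs)

    def score(self, user_id, features):
        # Apply the frozen model, conditioned on the stored user parameters.
        bias = self.user_params.get(user_id, {}).get("bias_adjustment", 0.0)
        return sum(w * f for w, f in zip(self.weights, features)) + bias

# Changing self.weights -- the true "learning" -- would happen periodically,
# offline, in the high-performance environment.

Retraining the weights, by contrast, is the step that needs the heavy infrastructure, and in most deployed systems it happens on a schedule rather than on the fly.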

Thanks again for your time. Before we wrap up, would you like to share your advice on a realistic approach to AI (and its capabilities)?

Start small -- run a lot of experiments -- but always keep your eye on the future and make sure that you are building reusable and shareable capabilities -- in people, in technology, in data, and especially in an understanding of how computational statistics differs from mathematical statistics! Always be learning!

Cynthia received her B.A. from Duke University and her M.B.A. and Ph.D. degrees from UCLA. Her research interests include the study of vendor-client relationships, the management of information services, and information system implementation and maintenance.

She has received numerous Professional Awards including SIM/APC Research Grant, NSF DRMS Program Grant, Fulbright Scholar Award, and a U.S. Department of Education grant in addition to Teaching Awards at SMU Cox School for faculty mentoring and the BBA Program. She has also authored or co-authored over 30 publications on Information Systems and Information Technology.

  View Dr. Beath’s Profile at UTexas.edu

Disclaimer: The opinions of Insercorp Water Cooler Bloggers are their own and do not reflect the official position(s) of Insercorp LTD. Insercorp LTD is not affiliated with the University of Texas or MIT, and no incentive was offered or received related to this story.

Jitendra Bhojwani
Tech Blogger
@Jatin is one of the Water Cooler's Contributing Bloggers, focusing on Technology and AI/ML (Artificial Intelligence / Machine Learning).