Researchers at the University of Texas at Austin are using information technology platforms to refine the art of cancer treatment by scanning petabytes of data for correlations between a cancer patient’s genome and the composition of their tumors. To process such huge volumes of data faster, the computing platform used by the researchers performs the processing in memory at high speed.
Which of the computing platforms was used at the University of Texas in the given context?
a. Clusters or grids
b. Massively parallel processing (MPP)
c. High-performance computing (HPC)
e. Node-based computing
Based on the given information, the computing platform used at the University of Texas at Austin for processing the petabytes of data and achieving high-speed processing in memory is most likely:
c. High-performance computing (HPC)
High-performance computing (HPC) systems are designed to handle computationally intensive tasks and large-scale data processing. They typically utilize parallel processing and advanced hardware configurations to deliver high processing power and faster data analysis. Given the requirement of scanning petabytes of data and achieving high-speed processing, an HPC system would be a suitable choice for this research at the University of Texas at Austin.
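The scenario does not include any code; the following is a minimal sketch of the idea behind HPC-style analysis (splitting an in-memory data set and scanning the pieces in parallel). The record format and the match rule are illustrative assumptions, and a real HPC system would distribute the work across many nodes rather than threads in one process.

```python
from concurrent.futures import ThreadPoolExecutor

def count_matches(chunk):
    # Each worker scans its in-memory chunk independently.
    # Hypothetical match rule: a genome marker equals a tumor marker.
    return sum(1 for gene, marker in chunk if gene == marker)

def parallel_scan(records, workers=4):
    # Split the in-memory data set into one chunk per worker.
    size = max(1, len(records) // workers)
    chunks = [records[i:i + size] for i in range(0, len(records), size)]
    # Scan all chunks in parallel and combine the partial counts.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_matches, chunks))
```

At petabyte scale the same divide-and-combine pattern runs across cluster nodes, with each node holding its partition in memory to avoid slow disk access.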
Tesla, the auto major, is well known for its self-driving cars. Thousands of data points are generated per second by onboard sensors in the car (referred to as edge processing) to keep the driver and those around them safe. There is not much room for error at 80 mph, and AI makes sure those errors are minimal. The number of transactions to be processed per second has to be very high in such situations, and data should be processed as soon as it is generated.
Which of the following computing limitations of big data is being discussed at Tesla?
The computing limitation of big data being discussed at Tesla, based on the given information, is:
c. Memory Bound
Processing thousands of data points per second from onboard sensors requires substantial memory to hold the incoming data and perform real-time analysis. To minimize errors and keep the driver and pedestrians safe at high speed, data must be processed the moment it is generated, so the computing system needs enough memory capacity to absorb the high influx of data and compute on it in real time. The limitation being discussed is therefore a memory-bound scenario, where the speed and efficiency of data processing depend on the available memory resources.
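As a minimal sketch of memory-bound, process-on-arrival stream handling: each sensor reading is analyzed the instant it arrives, and only a bounded window is kept in memory, since the window size (RAM) is the constraint. The window size and rolling-mean analysis are illustrative assumptions, not Tesla's actual pipeline.

```python
from collections import deque
from statistics import fmean

class SensorWindow:
    """Process readings immediately, keeping a bounded in-memory window."""

    def __init__(self, capacity):
        # deque with maxlen gives a fixed memory footprint:
        # the oldest reading is evicted automatically when full.
        self.window = deque(maxlen=capacity)

    def ingest(self, reading):
        # No batching to disk: the reading is processed on arrival.
        self.window.append(reading)
        # Illustrative real-time analysis: rolling mean over the window.
        return fmean(self.window)
```

The larger the window that fits in memory, the more context each real-time decision can use; this is the sense in which the workload is memory bound.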
GlobalSecurity.org is the leading source of background information and developing news stories in the fields of defense, space, intelligence, WMD (weapons of mass destruction), and homeland security. Launched in 2000, GlobalSecurity.org is the most comprehensive and authoritative online destination for those in need of both reliable background information and breaking news. It updates the information every hour. It has been working on natural language processing techniques to facilitate convenient search by its customers. It creates indices that increase the speed of search. These indices are created in external storage systems.
Index creation and access is a continuous process and involves access to external storage. In one of the review meetings of the IT Department, it was recognized that the performance of the system had degraded and that the reason was insufficient bandwidth, and appropriate corrective action was initiated to increase the bandwidth. Which of the following computing limitations of big data is being discussed at GlobalSecurity.org?
The computing limitation of big data being discussed at GlobalSecurity.org, based on the given information, is:
b. I/O Bound
The review meeting of the IT Department at GlobalSecurity.org highlighted the performance degradation of the system due to insufficient bandwidth. This suggests that the bottleneck affecting the system’s performance is related to input/output (I/O) operations. The continuous process of index creation and access, which involves accessing external storage systems, requires efficient data transfer between the computing system and the external storage. Insufficient bandwidth can hinder the speed at which data is transferred, leading to reduced system performance. Thus, the limitation being discussed is an I/O bound scenario where the performance is constrained by the limitations of the input/output operations and the available bandwidth.
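A minimal sketch of what makes such a workload I/O bound: every index lookup goes out to external storage, so throughput is capped by storage bandwidth rather than CPU speed. The file path and index layout here are illustrative assumptions, not GlobalSecurity.org's actual system.

```python
import json
import os
import tempfile

def write_index(path, index):
    # The search index lives on external storage (here, a file on disk).
    with open(path, "w") as f:
        json.dump(index, f)

def lookup(path, term):
    # Every lookup re-reads the index from external storage, so the
    # time per query is dominated by I/O, not computation.
    with open(path) as f:
        return json.load(f).get(term, [])

# Illustrative index: search term -> list of document IDs.
index_path = os.path.join(tempfile.mkdtemp(), "gs_index.json")
write_index(index_path, {"missile": [3, 17], "radar": [5]})
```

Doubling the CPU speed would barely help here; widening the path to storage (more bandwidth, caching, or faster media) is what improves an I/O-bound system.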
IHDFC Bank, a new-generation bank, has been investing heavily in information technology to achieve operational efficiency, compliance, and, most importantly, customer satisfaction. In one of its review meetings, the Head of Technology of the bank discussed the current state of its storage systems, their adequacy, and the need for upgrades. A decision was taken to move some of the business-critical real-time customer data to fast optical storage. The meeting also discussed which data should be archived to meet regulatory requirements. Which aspect of data storage for Big Data is being discussed at IHDFC Bank?
e. Integration with legacy systems
The aspect of Data storage for Big Data being discussed at IHDFC Bank, based on the given information, is:
d. Tiered architecture
The decision to move some of the business-critical real-time customer data to fast optical storage suggests the implementation of a tiered architecture for data storage. Tiered architecture involves organizing data into different tiers or levels based on importance, frequency of access, and performance requirements. In this case, the bank recognizes the need to upgrade its storage systems to improve operational efficiency and ensure timely access to critical customer data. By implementing a tiered architecture, the bank can allocate different storage technologies to different classes of data, ensuring that business-critical real-time customer data sits on fast storage while data kept only for regulatory compliance moves to cheaper archival tiers.
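A minimal sketch of tiered data placement: hot, business-critical data lands on the fastest tier, warm data on disk, and rarely accessed data goes to an archive tier for regulatory retention. The tier names, record fields, and access-frequency thresholds are illustrative assumptions, not the bank's actual policy.

```python
def assign_tier(record):
    """Place a data record on a storage tier by criticality and access rate."""
    if record["critical"] and record["accesses_per_day"] > 100:
        # Business-critical real-time data: fastest tier
        # (e.g., the bank's fast optical storage).
        return "fast-optical"
    if record["accesses_per_day"] > 1:
        # Warm operational data: ordinary disk storage.
        return "disk"
    # Cold data retained only to meet regulatory requirements.
    return "archive"
```

The point of the policy is cost and performance balance: fast media are expensive, so only the data whose access pattern justifies them is placed there.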
One of the well-known applications of IBM Watson has been the ‘Watson for Oncology’ application, which IBM developed in partnership with New York’s Memorial Sloan Kettering Cancer Center (MSK). MSK oncologists are known experts in certain types of cancers. IBM Watson can be trained to take on their expertise, and then the knowledge becomes available to any doctor from any corner of the world. The IBM technical team debated the type of file system to be used for storing huge data for analysis. The requirement is to include all types of data, such as text, images (CT scans), and other unstructured data. It should be possible to dump all data without any set-up process. Finally, they decided to adopt a file system meeting these requirements. Which of the following was chosen as the file system for storage of Big Data at IBM for the analysis of the huge cancer data?
a. Relational database
b. Distributed database
c. HDFS (Hadoop Distributed File System)
e. Extended relational database
The chosen File System for storage of Big Data at IBM for the analysis of huge Cancer data, based on the given information, is:
c. HDFS (Hadoop Distributed File System)
The requirement to store all types of data, including text, images (CT Scans), and other unstructured data, without any set-up process aligns with the capabilities of the Hadoop Distributed File System (HDFS). HDFS is a highly scalable and distributed file system designed to handle large volumes of data across multiple nodes in a cluster. It is specifically designed for storing and processing Big Data in a fault-tolerant and distributed manner. Given the vast amount of cancer-related data, including unstructured data like images, HDFS provides the scalability and flexibility required for efficient storage and analysis.
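The "dump all data without any set-up process" property is often called schema-on-read, and a minimal sketch of the idea follows: files of any type are stored as raw bytes with no up-front schema, and structure is imposed only when the analysis reads them. The `RawStore` class, paths, and record contents are illustrative assumptions, not HDFS's actual API.

```python
import json

class RawStore:
    """Toy schema-on-read store: accept any bytes, interpret only at read time."""

    def __init__(self):
        self.files = {}  # path -> raw bytes; no schema is enforced on write

    def put(self, path, data):
        # No set-up step: text, images, and unstructured data are all
        # dumped as-is, just as files are copied into HDFS.
        self.files[path] = data

    def read_json(self, path):
        # Structure is imposed only when the analysis reads the file.
        return json.loads(self.files[path])

store = RawStore()
store.put("/scans/ct_001.dcm", b"\x00\x01raw-image-bytes")   # binary CT scan
store.put("/notes/patient1.json", b'{"diagnosis": "benign"}')  # text record
```

A relational database, by contrast, is schema-on-write: tables and columns must be defined before any data can be loaded, which is exactly the set-up process the requirement rules out.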