You are training a TensorFlow model on a structured dataset with 100 billion records stored in several CSV files. You need to improve the input/output execution performance. What should you do?
A.
Load the data into BigQuery, and read the data from BigQuery.
B.
Load the data into Cloud Bigtable, and read the data from Bigtable.
C.
Convert the CSV files into shards of TFRecords, and store the data in Cloud Storage.
D.
Convert the CSV files into shards of TFRecords, and store the data in the Hadoop Distributed File System (HDFS).
C) The most suitable option for improving input/output execution performance in this scenario is C: convert the CSV files into shards of TFRecords and store the data in Cloud Storage. This approach leverages the read efficiency of the TFRecord format and the scalability of Cloud Storage, in line with TensorFlow best practices.
C https://datascience.stackexchange.com/questions/16318/what-is-the-benefit-of-splitting-tfrecord-file-into-shards#:~:text=Splitting%20TFRecord%20files%20into%20shards,them%20through%20a%20training%20process.
bard: The correct answer is:
C. Convert the CSV files into shards of TFRecords, and store the data in Cloud Storage.
TFRecords is a TensorFlow-specific binary format that is optimized for performance. Converting the CSV files into TFRecords will improve the input/output execution performance. Sharding the TFRecords will allow the data to be read in parallel, which will further improve performance.
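To make the sharding idea concrete, here is a minimal stdlib-only Python sketch (no TensorFlow; the shard count and record contents are made up for illustration) that distributes CSV rows round-robin across N shards, the same assignment a real pipeline would use when writing files like `train-00000-of-00004.tfrecord`:

```python
import csv
import io

def shard_csv_rows(rows, num_shards=4):
    """Distribute parsed CSV rows round-robin across num_shards buckets.

    In a real pipeline each bucket would be serialized with
    tf.io.TFRecordWriter into its own shard file; here we just
    return the buckets to show the assignment.
    """
    shards = [[] for _ in range(num_shards)]
    for i, row in enumerate(rows):
        shards[i % num_shards].append(row)
    return shards

# Example: 10 toy records split across 4 shards
data = io.StringIO("\n".join(f"id{i},value{i}" for i in range(10)))
shards = shard_csv_rows(list(csv.reader(data)), num_shards=4)
```

Because each shard is an independent file, a training job can open all of them at once and interleave their records, which is where the parallel-read speedup comes from.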
The other options are not as likely to improve performance.
Loading the data into BigQuery or Cloud Bigtable will add an additional layer of abstraction, which can slow down performance.
Storing the TFRecords in HDFS is not likely to improve performance, as HDFS is not optimized for TensorFlow.
Using BigQuery or Bigtable may not be the most efficient option for input/output operations with TensorFlow. Storing the data in HDFS may be an option, but Cloud Storage is generally a more scalable and cost-effective solution.
While Bigtable can offer high-performance I/O capabilities, it is important to note that it is primarily designed for structured data storage and real-time access patterns. In this scenario, the focus is on optimizing input/output execution performance, and using TFRecords in Cloud Storage aligns well with that goal.
A. Load the data into BigQuery, and read the data from BigQuery.
https://cloud.google.com/blog/products/ai-machine-learning/tensorflow-enterprise-makes-accessing-data-on-google-cloud-faster-and-easier
Precisely in the link provided in other comments, it shows that the best throughput with TFRecords is 18,752 records per second, while the same report shows BigQuery at more than 40,000 records per second.
BigQuery is designed for running large-scale analytical queries, not for serving input pipelines for machine learning models like TensorFlow. BigQuery's strength is in its ability to handle complex queries over vast amounts of data, but it may not provide the optimal performance for the specific task of feeding data into a TensorFlow model.
On the other hand, converting the CSV files into shards of TFRecords and storing them in Cloud Storage (Option C) will provide better performance because TFRecords is a format designed specifically for TensorFlow. It allows for efficient storage and retrieval of data, making it a more suitable choice for improving the input/output execution performance. Additionally, Cloud Storage provides high throughput and low-latency data access, which is beneficial for training large-scale TensorFlow models.
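To illustrate the parallel-read benefit described above, here is a hedged stdlib-only sketch using `concurrent.futures` (in a real TensorFlow pipeline, `tf.data.TFRecordDataset` with `num_parallel_reads` or `interleave` plays this role; the shard sizes below are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def read_shard(shard):
    """Stand-in for parsing one TFRecord shard: just count its records."""
    return len(shard)

# Four hypothetical shards of pre-split records
shards = [["rec"] * n for n in (3, 3, 2, 2)]

# Read all shards concurrently, mirroring tf.data's parallel interleave
with ThreadPoolExecutor(max_workers=4) as pool:
    counts = list(pool.map(read_shard, shards))

total = sum(counts)  # all 10 records consumed across 4 parallel readers
```

With a single monolithic file there is only one reader; with shards, throughput scales with the number of concurrent readers until storage bandwidth is saturated.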
Cloud Bigtable is typically used to process unstructured data, such as time-series data, logs, or other types of data that do not conform to a fixed schema. However, Cloud Bigtable can also be used to store structured data if necessary, such as in the case of a key-value store or a database that does not require complex relational queries.
Option C, converting the CSV files into shards of TFRecords and storing the data in Cloud Storage, is the most appropriate solution for improving input/output execution performance in this scenario
https://cloud.google.com/architecture/ml-on-gcp-best-practices#store-tabular-data-in-bigquery
BigQuery for structured data, Cloud Storage for unstructured.
Agree. BigQuery and Cloud Storage have effectively identical storage performance; BigQuery is optimised for structured datasets and GCS for unstructured data.