Low I/O performance when reading a big table

Hello all!

I have a large database of more than 2 TB. This database has two large tables of over 400 GB each. I need to export these tables to files for external processing by my partner.

To do this, I run a simple SELECT query without a WHERE condition.

It has to read all the rows in the table, but this happens very slowly.
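
For example, something like this (the table name here is just a placeholder for the real one):

    -- full table scan, no WHERE clause
    SELECT * FROM dbo.BigTable;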

At the same time, the conditions are as follows:

  • the database is running in a virtual machine
  • only one session exists on the entire database - mine
  • only the one SELECT mentioned above is running
  • the disk holding the database files is fast - more than 400 MB/sec for reads
  • but the database (SQL Server) reads data at only 6-10 MB/sec
  • in the performance statistics, I see a lot of PAGEIOLATCH_EX waits
  • if I run SELECTs from other tables in several sessions, MS SQL can drive the disk at up to 200-300 MB/sec
  • but reading data from one large table in a single session is very slow - it takes more than a day!
  • Resource Governor is disabled

I have run out of ideas. I need to speed up this process, as I have to do it often at the moment.

Colleagues, what could be the reason?

Best regards,

c0ff75

Are you using the bcp utility?

That would be my recommendation. I'm using it now to export a multi-terabyte table in batches. It will use very little memory and very few PAGEIOLATCHes. I typically see 100 MB/s or better with the queries I use for export.
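
For example, a rough sketch of the kind of commands I mean (the server, database, table, file paths, and key ranges are placeholders - the ranges assume an integer clustered key you can slice on):

    REM slice 1: export the first key range in SQL Server native format (-n) over a trusted connection (-T)
    bcp "SELECT * FROM BigDb.dbo.BigTable WHERE Id BETWEEN 1 AND 100000000" queryout E:\export\BigTable_001.dat -S MyServer -T -n

    REM slice 2: the next key range goes to its own file
    bcp "SELECT * FROM BigDb.dbo.BigTable WHERE Id BETWEEN 100000001 AND 200000000" queryout E:\export\BigTable_002.dat -S MyServer -T -n

Run several slices in parallel from separate command windows and you get the same multi-session throughput you already see at 200-300 MB/sec, instead of one slow single-session scan. Use -c instead of -n if your partner needs plain character data rather than SQL Server's native format.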
