SQLTeam.com | Weblogs | Forums

Insert performance

I have a table with over 1,272,137 records, and I'm having an issue when inserting rows.

I have a script that inserts rows containing 6 columns plus an identity column that is automatically populated by SQL Server.

Inserts into this table have now become very slow.

Any ideas?

Post your script please.

Function Capture_UserActivity()
	LogTheEvent("I", "Capturing User Activity Details")
	Dim $cn, $rs, $cmd, $cmdtext, $cnstring
	$cnstring = "Driver={SQL Server};Server=$SQL_Server;database=$SQL_Database;uid=$SQL_User;pwd=$SQL_Pass"
	$cn = CreateObject("adodb.connection")
	$cmd = CreateObject("adodb.command")
	$rs = CreateObject("adodb.recordset")
	$cn.connectionstring = $cnstring
	$cn.Open()
	$cmd.activeconnection = $cn
	$rs.cursortype = 1	; 1 = adOpenKeyset
	$rs.locktype = 3	; 3 = adLockOptimistic
	$cmdtext = "Select * from tbl_UserActivity"
	LogTheEvent("C", "SQL String: '$cmdtext'")
	$cmd.commandtext = $cmdtext
	$rs.Open($cmd)
	$rs.AddNew()
	$rs.fields.item("DateStamp").value = $Stamp_Date
	$rs.fields.item("TimeStamp").value = $Stamp_Time
	$rs.fields.item("UserID").value = $UserID
	$rs.fields.item("DeviceID").value = $DeviceID
	$rs.fields.item("IPAddress").value = $SessionIP
	$rs.fields.item("ActivityID").value = $ActivityID
	$rs.Update()
	$rs.Close()
	$cn.Close()
EndFunction

That will retrieve EVERY column from EVERY row in that table. Even if that is what you want, you should always name the columns explicitly.

I don't know whether your application code $rs.Open($cmd) ACTUALLY RUNS that query, and if so whether running it is necessary in order to then do an AddNew. If it is not actually running the query then that's not a problem, but you should still explicitly name only the columns you need.

(I would debug that by displaying the time before/after the $rs.Open($cmd) command and seeing whether that line is what is taking the time, unless you know that it is a PREPARE command rather than an EXECUTE one.)
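If the recordset open is the slow part, one common pattern is to name only the columns you insert into and open an intentionally empty recordset, purely to get an updatable structure for AddNew. A sketch, assuming the column names from the script above:

```sql
-- Name only the columns you actually insert into, and return no rows:
-- the WHERE 1 = 0 predicate means the open is instant regardless of
-- how large tbl_UserActivity grows.
SELECT DateStamp, TimeStamp, UserID, DeviceID, IPAddress, ActivityID
FROM tbl_UserActivity
WHERE 1 = 0;
```

That way the insert cost no longer depends on how many rows are already in the table.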

If you use SELECT *, someone in the future might add an IMAGE column with 10MB per row in it (maybe they already did?!!!) and that will crucify your query. At that point you will have to find & fix every single query that uses SELECT * ... and retest the APP ... and deploy all that.

The reason I know is that I was called in to solve exactly that problem. The Call Centre said "Can we have a little Notes field where we can record the details of the call, in case someone else picks up the call next time", and their nice developers added that one column to several tables in the DB and ... guess what ... the call centre folk wrote their life's history in there! And the developers had used SELECT * in every query throughout the whole APP :frowning:

The other likely problem is that the table, now that it has got big, is suffering from a number of possible ailments:

  1. If it does not have a CLUSTERED index then housekeeping may not be adequately removing / reusing space from deleted rows. The solution is to create a clustered index (or make the table's primary key CLUSTERED). It would be much better to create a UNIQUE clustered index if you can; if you have a choice, pick the narrowest NOT NULL column(s) that make the clustered key unique.

  2. Then rebuild all the indexes on the table.

  3. Then update all the statistics on the table (somewhat redundant, as the index rebuild will have updated the index statistics too, but there are likely to be other, non-index stats).
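Against SQL Server, the three steps above might look like this. A sketch only: the constraint name and key column are placeholders, since the actual schema wasn't posted (I've assumed the identity column is called ActivityLogID):

```sql
-- 1. Make the primary key clustered (substitute your real
--    identity/key column for ActivityLogID).
ALTER TABLE dbo.tbl_UserActivity
    ADD CONSTRAINT PK_tbl_UserActivity
    PRIMARY KEY CLUSTERED (ActivityLogID);

-- 2. Rebuild every index on the table.
ALTER INDEX ALL ON dbo.tbl_UserActivity REBUILD;

-- 3. Refresh the remaining (non-index) statistics.
UPDATE STATISTICS dbo.tbl_UserActivity WITH FULLSCAN;
```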

If that sorts it out then it probably means that you don't have any housekeeping running on your database; or it is not running often enough; or it runs often, but for a limited time, and is not prioritising the "most needy" tables. In any of those cases it would be worth putting that right going forwards.

P.S. If there are no indexes on the table at all then queries are likely to be slow, so you might want to look at adding appropriate indexes that "cover" frequently used queries, JOINs, and so on.
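For example, a query that filters by user and date range could be covered like this. The column choice is hypothetical; pick the columns your real queries actually filter on (the key) and return (the INCLUDE list):

```sql
-- "Covers" a query such as
--   SELECT ActivityID, DeviceID FROM dbo.tbl_UserActivity
--   WHERE UserID = @u AND DateStamp >= @d
-- so it can be answered from the index alone, with no key lookups.
CREATE NONCLUSTERED INDEX IX_tbl_UserActivity_UserID_DateStamp
    ON dbo.tbl_UserActivity (UserID, DateStamp)
    INCLUDE (ActivityID, DeviceID);
```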

When was the last time the database was backed up, along with its logs?

Oooooh! I'd forgotten about that.

Those same people who did SELECT * called me back (very nice client, lots of repeat business!!) when they failed to back up their LOG file after moving to a new server. They had installed a massive disk; it held only that one database's MDF file and the LDF file ... which was, by then, super massive and causing all sorts of performance problems. I can't remember now, but it was probably set to expand by 1MB at a time, and thus massively fragmented ... or maybe by 10%, which, back then when SQL actually stopped to initialise the file extension, would have been a huge block on all activity once the LDF got to 10GB and then extended by 1GB at a time ...
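For anyone hitting the same thing: you can check how full the log actually is, and set a sensible fixed growth increment instead of 1MB or 10%. The database and logical file names below are placeholders:

```sql
-- How much of each database's log file is actually in use?
DBCC SQLPERF(LOGSPACE);

-- Grow the log in fixed 512MB steps rather than 1MB or 10%,
-- so growth events are infrequent and predictable.
ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_log, FILEGROWTH = 512MB);
```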

Just got bitten by one this week. I tried all kinds of junk: indexes, the query planner, you name it. Then someone asked "have you ever taken a backup?" "No, we don't care if 8 of our SQL servers die" :rotating_light:. The server had been up and running for ages, with engineers who want to do their own thing but want our help when they're hurting. So it was a good time to strongly suggest they focus on their engineering and leave the databases to IT. Took a full (.bak) and log (.trn) backup, which took a looooong time. 6-minute queries are now down to sub-seconds. :dancing_men:
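For a database in FULL recovery, the pattern that keeps this from recurring is a full backup plus regular log backups, roughly like this (database name and paths are placeholders):

```sql
-- Full backup: the baseline .bak file.
BACKUP DATABASE MyDatabase
    TO DISK = N'D:\Backups\MyDatabase.bak';

-- Log backup: the .trn file. Backing up the log marks the inactive
-- portion as reusable, so the LDF stops growing forever.
BACKUP LOG MyDatabase
    TO DISK = N'D:\Backups\MyDatabase.trn';
```

Schedule the log backup frequently (every few minutes to hourly, depending on how much data loss is acceptable).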

I hope you put a run-once-and-self-delete job in place to cause another "You really-really need us" outcome in a few weeks time? :heart_eyes:

But I need repeat business, hehe