Error converting data type nvarchar to float

Hi, I came across the error "Error converting data type nvarchar to float." from the UI, but to my surprise it did not throw any error when I ran the procedure from SSMS. Then the same procedure with the same parameters threw the error in SSMS when run from another machine. I created a copy of the procedure just to debug the culprit statement, and the copy ran fine (on the same machine, via SSMS, where the original procedure had failed). This had to do with either some SET option settings or maybe a bad plan in the cache.

So finally I re-created the original procedure and the error went away, presumably because the right plan was now being created and picked up. Has anyone seen this before? Can someone throw some light on this?


At a guess - there is most likely bad data in the column. On the procedure that was failing - with the plan that was failing - the offending row was processed through the conversion. With the non-offending plan(s), that row is excluded before it reaches the conversion.
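Since TRY_CONVERT may not be available at a lower compatibility level, one way to hunt for the offending rows is an ISNUMERIC check (a sketch only; the table and column names here are placeholders, not from the original post):

```sql
-- Hypothetical names: dbo.YourTable / NVarcharCol.
-- Appending 'e0' makes ISNUMERIC reject currency signs and thousands
-- separators that CONVERT(float, ...) would also choke on.
SELECT *
FROM dbo.YourTable
WHERE NVarcharCol IS NOT NULL
  AND ISNUMERIC(NVarcharCol + 'e0') = 0;
```

ISNUMERIC has known edge cases (e.g. values already in scientific notation), so treat the result as a candidate list rather than a definitive answer.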

Thanks Jeff, yes I agree with you. The column is nvarchar and is being converted to float, something like this: CONVERT(float, ISNULL(NVarcharCol, 0))

But what I still am not able to grasp is why it failed in SSMS on one machine while it ran fine in SSMS on another. This could be due to SET ARITHABORT OFF on the one where it was failing; at least that's what I thought, but I'm not able to reproduce it now.
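If you want to test the ARITHABORT theory, you can compare the SET options of the two SSMS connections; a sketch:

```sql
-- Each session's effective SET options (run from any connection)
SELECT session_id, program_name, arithabort, ansi_nulls, quoted_identifier
FROM sys.dm_exec_sessions
WHERE program_name LIKE 'Microsoft SQL Server Management Studio%';

-- Or, from within the session itself:
DBCC USEROPTIONS;
```

Differing SET options mean the two connections cannot share a cached plan, which would explain the different behavior between the two machines.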

It probably failed in the one SSMS due to a different plan being utilized from that system - where the other machine in SSMS used a different plan. That could be caused by different session settings or by how the procedure is written. If objects are not schema qualified - and the users have different default schemas - then separate plans will be generated depending on which user executes the code.
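For example (illustrative names only):

```sql
-- Unqualified: name resolution depends on the caller's default schema,
-- so different users can end up with separate cached plans.
SELECT SomeCol FROM SomeTable;

-- Schema-qualified: resolves the same way for every caller,
-- so one plan can be shared.
SELECT SomeCol FROM dbo.SomeTable;
```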

When you recreated the procedure - a new plan was generated which avoids the 'bad' data.

Since you now know you have bad data - switch to using TRY_CONVERT instead of plain CONVERT. That way the bad data will be converted to NULL and you can avoid the issue when the plan changes again.
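Something along these lines (placeholder names; TRY_CONVERT requires database compatibility level 110 or higher):

```sql
-- Bad values become NULL instead of raising
-- 'Error converting data type nvarchar to float'.
SELECT TRY_CONVERT(float, ISNULL(NVarcharCol, '0')) AS FloatVal
FROM dbo.YourTable;
```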

I tried switching to TRY_CONVERT earlier but it failed even though we are on SQL 2014, so I checked the compatibility level and it was 100, which means SQL 2008. Apart from the new cardinality estimator, what other benefits could we get from switching the compatibility level (the new T-SQL features of 2014 being one of them)? I know it should have been 120 all along; do you see any major query performance gains we might be missing out on?


The only way to know for sure is to test your code under the updated compatibility level. If you have code that uses deprecated features, that code would need to be updated before switching.
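Checking and changing the level is straightforward (the database name is a placeholder; test in a non-production environment first):

```sql
-- Current level
SELECT name, compatibility_level
FROM sys.databases
WHERE name = 'YourDb';

-- Switch to SQL Server 2014 behavior (120)
ALTER DATABASE YourDb SET COMPATIBILITY_LEVEL = 120;
```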

Yes I guess that is the only way to check, to change the compatibility level.

One more thing: as part of our maintenance activity we run a weekly index rebuild for all tables on Sundays. This issue came up on a Monday, while everything ran fine in the lower environments. We came across something similar a few weeks back, when a wrong plan was being picked in production, and that too happened on a Monday, i.e. immediately following the index rebuild. The lower environments worked fine, so I did an index rebuild on Stage and the error was reproduced.

I have been reading articles about how index rebuilds can make queries go slower or cause a wrong plan to be picked, and now this issue has resurfaced after the maintenance activity. I am wondering if we should decrease the frequency of rebuilds, as you never know when it can occur again, and we don't have a foolproof solution to tackle this except adding RECOMPILE options.

Any thoughts on the above? I appreciate all your inputs and suggestions.

Personally, I'm of the opinion that you shouldn't rebuild indexes unless there is an actual problem occurring. The "problems" that can arise from not doing so are frequently overstated and the side effects from rebuilding them on a schedule are often worse.

Thanks Andy, I too am now leaning towards either completely eliminating the rebuilds or at least reducing their frequency, as this is really beginning to hurt us now, though sporadically.

It isn't the index rebuilds that cause the problem - it is the statistics that are updated during the rebuild process. If you don't have a process that updates statistics and you stop rebuilding indexes, you will end up with even worse performance issues.
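If you do scale back the rebuilds, a separate statistics maintenance job could look something like this (sketch with placeholder names):

```sql
-- Refresh statistics on a specific table with a full scan
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;

-- Or refresh all out-of-date statistics in the database (sampled)
EXEC sp_updatestats;
```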

Rebuilding indexes does not cause problems - and any so-called side effects are because of incorrectly set fill factors on the indexes being rebuilt. If you rebuild an index with a 100% fill factor that contains varchar columns whose values frequently change in size, then you will suffer a high level of page splits, because that data cannot 'grow' within the existing index pages.
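A lower fill factor leaves room on each page for rows to grow, for example (index and table names are placeholders):

```sql
-- Leave 10% free space per leaf page to absorb growth
-- of variable-length values.
ALTER INDEX IX_YourIndex ON dbo.YourTable
REBUILD WITH (FILLFACTOR = 90);
```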

Allowing an index to page split and naturally balance out across all pages can still cause significant issues. When that occurs the new pages are not contiguous (new pages are added at the end of the tree) - and since SQL Server reads data an extent at a time, you end up reading more data into memory than may be necessary.

Back to the issue at hand - the problem is bad data that you are attempting to convert. The bad data goes through the CONVERT when a new plan is generated (which happens because statistics have been updated). When you recreate the procedure, a new plan is generated the first time the procedure is called, and that plan happens to filter out the bad data before it reaches the convert operator.

This can be resolved by using TRY_CONVERT (once you change the compatibility level), by forcing the plan (e.g. with OPTIMIZE FOR), by rewriting the code, or by fixing the bad data.
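Until the compatibility level allows TRY_CONVERT, a CASE guard is one common rewrite, since a CASE expression generally evaluates its WHEN condition before the matching THEN branch (a sketch with placeholder names; the 'e0' trick has edge cases, as noted above):

```sql
-- Only pass values through CONVERT when they look like valid floats;
-- everything else becomes NULL instead of raising an error.
SELECT CONVERT(float,
           CASE WHEN ISNUMERIC(NVarcharCol + 'e0') = 1
                THEN NVarcharCol
                ELSE NULL
           END) AS FloatVal
FROM dbo.YourTable;
```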

Jeff thanks for the detailed explanation, I will see what best I can do here.

If the indexes are fragmented you should run a rebuild or reorganize, depending on the fragmentation level.
Ola Hallengren has a wonderful script which does this.

Thanks Ahmed, posting the link here:

Check out this link. This may help you out:

Thanks Jason for the article, I was not aware of the following:
But your second problem is that SQL can process the query in any order it chooses. So even if your WHERE clause only chooses rows that can be converted to float, SQL might convert ALL of the rows to float and only later check the WHERE clause and throw away the rows that don't match the WHERE clause. Of course, in your case, if SQL does this, you get a conversion error before the WHERE clause is processed and the statement fails.
This is very interesting, thanks for pointing it out.
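To make the quoted point concrete, a query like this is not safe even though the WHERE clause looks like it filters out the bad rows first (placeholder names):

```sql
-- The optimizer may evaluate the CONVERT before the WHERE predicate,
-- so this can still fail on rows the ISNUMERIC check would have excluded.
SELECT CONVERT(float, NVarcharCol) AS FloatVal
FROM dbo.YourTable
WHERE ISNUMERIC(NVarcharCol) = 1;
```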
