How fast a SELECT TOP 10 runs against a table with millions of rows, when only 9 matching rows exist, will depend on the indexes available.
If the criteria (WHERE clause etc.) for the query are "covered" by an index then finding just the 9, or the first 10, will be very quick.
If there is no suitable index then SQL will use a Table Scan - i.e. check each row in turn. Finding only 9 will be slow because the whole table has to be scanned, and finding 10 may be slow too - the 10th, and last, matching record could be right at the "end" of the table scan. So it is down to chance whether SQL finds all the rows quickly (i.e. they happen to be near the start of the table scan) or not.
The best answer is "Create an index so that the Query is always fast" 
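For example - just a sketch, using the placeholder table and column names from the queries further down - a covering index for a filter on TestCol1 and TestCol2 might look like:
CREATE NONCLUSTERED INDEX IX_MyTable_TestCol1_TestCol2
    ON dbo.MyTable (TestCol1, TestCol2)
    INCLUDE (Col1, Col2);   -- include the selected columns so the index "covers" the query
With the WHERE columns as the index keys, SQL can seek straight to the matching rows and stop after the first 10 (or all 9) without touching the rest of the table.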
If your query uses a table scan then it is always going to check all rows to get a COUNT() 
I would not want to run the query twice - once to get the COUNT and once to get the TOP 10 - although running the COUNT first will load the cache, so the TOP 10 query might then be faster because the data is already in cache.
I would try doing the COUNT and the TOP 10 in one query and compare that with the speed of just doing the TOP 10 - maybe it will be very similar.
So maybe compare
SELECT [TotalRows] = COUNT(*)
FROM MyTable
WHERE TestCol1 = 123 AND TestCol2 = 'XYZ'
--
SELECT TOP 10 Col1, Col2, ...
FROM MyTable
WHERE TestCol1 = 123 AND TestCol2 = 'XYZ'
(NOTE: I presume you have NO ORDER BY, because without an index that provides that order SQL would have to find every matching row and THEN sort them to work out the TOP 10 - see the index sketch just after the second query below)
with
SELECT TOP 10 [TotalRows] = COUNT(*) OVER()
, Col1, Col2, ...
FROM MyTable
WHERE TestCol1 = 123 AND TestCol2 = 'XYZ'
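As an aside on the ORDER BY note above: if you did need a specific order, an index keyed on the filter columns followed by the sort column lets SQL read the matching rows already in that order and stop after 10. A rough sketch - SortCol is just a placeholder for whatever column you would order by:
CREATE NONCLUSTERED INDEX IX_MyTable_Filter_Sort
    ON dbo.MyTable (TestCol1, TestCol2, SortCol);
--
SELECT TOP 10 Col1, Col2, ...
FROM MyTable
WHERE TestCol1 = 123 AND TestCol2 = 'XYZ'
ORDER BY SortCol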
One way to check the relative "cost" of each query is to wrap it with
SET STATISTICS IO ON; SET STATISTICS TIME ON;
... your query here ...
SET STATISTICS IO OFF; SET STATISTICS TIME OFF;
GO
and compare the Scan Count and Logical Reads (NOT the Physical Reads).
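The IO figures appear on the Messages tab and look something like this (the numbers below are made up, purely to show where Scan count and logical reads sit):
Table 'MyTable'. Scan count 1, logical reads 123, physical reads 0, read-ahead reads 0, ...
Physical reads depend on what happens to be in cache at the time, which is why logical reads are the fairer comparison between the two queries.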
You can clear cache (before each test) with:
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
but you should not use those on a Production Server 
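One extra wrinkle if you do clear the cache on a test box: DBCC DROPCLEANBUFFERS only throws away clean pages, so issue a CHECKPOINT first to flush any dirty pages and make sure the whole buffer pool really is cold:
CHECKPOINT;
DBCC DROPCLEANBUFFERS;   -- data cache now cold
DBCC FREEPROCCACHE;      -- plan cache cleared too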