I want to learn how to work with SQL Server and use Access as my front end, or create one in another language if I have to. I'm not sure I am doing this right, though. I've installed SQL Server 2016 Developer edition (since I'm developing, right?) and I have SSMS installed. I also have Visual Studio 2017 installed, Express or Community edition.
I created a server instance: MSSQLSERVER, no biggie, right? This server shows up in Visual Studio. OK. So I created a database, Inv001.mdf, in Visual Studio and added a couple of tables with some data. Now I would like to create a connection to this MDF from Access. So, I read that I need an ODBC driver, and I selected the 32-bit one, which is what Access needs.
I went through the basic steps to create a new data source, and no matter what I tried, whichever server I selected (local or the machine-named one), I got the same login timeout error message when I tried to change the default database.
So, in the SSMS, I discovered that my database was not on the list. So, I copied the mdf file to a folder the SSMS dialog could see, and attached it to the server. But I still get a timeout error.
What am I missing here?
I did post a while ago regarding a connection problem, if you happen to look up other posts, but that problem is gone because that hard drive is gone, too.
My guess is that Visual Studio created the database using some flavour of SQL Server that was not the Developer Edition that you installed. Frustrating I'm sure ...
Personally I think you would be better off creating a database in SSMS and then trying to make everything else connect to that. Using ATTACH in SSMS, for a file that you already have copied to an appropriate location, should be just as good. (I'd prefer that you copied both MDF and LDF files and attached them as a pair, rather than doing a Single File Attach of an MDF file alone)
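For what it's worth, the attach can also be done in T-SQL rather than via the SSMS dialog. A sketch, assuming (hypothetically) that both files were copied to a C:\SQLData folder; substitute your actual paths:

```sql
-- Attach the MDF and LDF as a pair; the paths below are placeholders.
CREATE DATABASE Inv001
ON (FILENAME = N'C:\SQLData\Inv001.mdf'),
   (FILENAME = N'C:\SQLData\Inv001_log.ldf')
FOR ATTACH;
```

The SQL Server service account needs read/write permission on the folder, which is a common reason an attach fails.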
The timeout thing is curious. I would expect:
Login with correct password - "instant"
Login with INcorrect password - "instant" but of course "Login failure"
Attempt to log in to a server/location that cannot be found - significant time before then getting a "Not found" type error message. A timeout at login doesn't really fit any of those scenarios, although it's similar to this last one.
Type: Database Engine
Name: computer name
Auth: Windows Authentication
Database Engine - Local Server Groups - computer name
computer name - SQL Server 13.0.4202.2 domain\user name
Database - Inv001 (the MDF file I copied).
What is an ldf file?
Open ODBC 32-bit from Control Panel
Add SQL Server Native Client 11.0
Server: computer name\MSSQLSERVER
Check "With Integrated Windows authentication"
Check "Change the default database to:"
Connection failed error after clicking the drop-down to change the default database:
SQL Server error: 87
Native Client 11.0: Connection string is not valid
SQL Server error: 0
Native Client 11.0: Login timeout expired
I will copy the LDF file over, detach the MDF, and try to attach both files together.
Ha! Very nice. I was getting the info from a YouTube channel; usually this guy is spot on for every programming topic I've watched, but I suppose you run into differences depending on system and services configurations.
Thank you so much.
As an addendum for Kristen:
I got the same error even after copying and attaching the LDF file, but as Jeff points out, I did not need the extra MSSQLSERVER in there (MSSQLSERVER is the default instance, so the server is addressed by the computer name alone). I should have posted that earlier as well, but that is why we are here: to LEARN!
An MS SQL Server database should have an MDF file for Data and an LDF file for the Transaction Log (it can have other files, and they don't have to have those particular filename extensions, but by default you will get two files, with those filename extensions, when you create a database).
The log file enables SQL to roll back, or roll forward, a transaction. For example, if you get a power failure on your Server then, as SQL restarts, any transactions that were "in progress" will either be rolled forward (if they were committed) or rolled back, and as a result the database will be in a consistent state.
The TLog also allows you to explicitly Begin a Transaction, perform multiple data changes, and then either Commit (if the outcome has no problems) or Rollback if it did. Either all the changes happen as one, or none of them do. That stops a transaction getting part-way through and then failing (a logical error such as a Foreign Key violation, a programming error preventing the code continuing, or something like contention causing a deadlock) and leaving the database with just the first part of the data changes applied.
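As a sketch of that Begin / Commit / Rollback pattern, here is the usual T-SQL shape (the dbo.Accounts table and the amounts are made up for illustration):

```sql
BEGIN TRY
    BEGIN TRANSACTION;

    -- two changes that must succeed or fail together
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
    UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountID = 2;

    COMMIT TRANSACTION;        -- both changes become permanent as one
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;  -- neither change is applied
    THROW;                     -- re-raise the error to the caller
END CATCH;
```

If anything inside the TRY block fails, control jumps to CATCH and the rollback undoes every change made since BEGIN TRANSACTION.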
You stated that you recommend building the database from within the SSMS, correct?
That means that it has all the capabilities I would need to create, update, append, delete, etc? Create FK, change properties...?
As a side question: what about file location? As I am learning to play with SSMS, Visual Studio, SQL Server Dev/Express, how would you recommend file storage location setup? Perhaps a follow up - are any of those 'other' files necessary for the use/updating of a database?
You can use GUI Design Tools to create all those ... or write them in SQL and EXEC that code (fine if you are an Expert and know all the SQL off the top of your head!!) ... or ... use the Design Tool, but do not press the SAVE button and instead press the SCRIPT button. That will generate a script that performs whatever Create / Modify Table, FKey, Property change etc. you made, and it might help you learn how such DDL actions are actually done in SQL code.
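To give a flavour, the SCRIPT button produces DDL along these lines (the table and column names here are invented for illustration):

```sql
-- Create a child table with a primary key and a foreign key
-- back to a hypothetical dbo.Orders parent table.
CREATE TABLE dbo.OrderLine
(
    OrderLineID int IDENTITY(1,1) NOT NULL,
    OrderID     int NOT NULL,
    Qty         int NOT NULL,
    CONSTRAINT PK_OrderLine PRIMARY KEY (OrderLineID),
    CONSTRAINT FK_OrderLine_Orders FOREIGN KEY (OrderID)
        REFERENCES dbo.Orders (OrderID)
);
```

Reading the generated scripts alongside what you clicked in the designer is a good way to pick up the DDL syntax.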
Also, if you need to make a change to a DEV database and then apply that change to a TEST database and, finally, to the PRODUCTION database, using the SCRIPT button and saving the SQL code in a file will allow you to subsequently run exactly the same DDL changes on TEST / PRODUCTION.
I expect you can do all that in Visual Studio too, but that's not something I know enough about to give you advice on.
Out-of-the-box SQL usually stores the MDF and LDF files in a folder underneath the installed programs - so basically your data files get created in C:\Program Files\Microsoft SQL Server...
This is obviously a bad idea!
You can set/change default locations during SQL install, provided that you know that it is important to do that. You can also alter the Server Setting for the default location for MDF and/or LDF ... but any databases you have already created, including the System Databases, will be in the original location chosen during SQL install.
Probably not too important whilst you are learning ... definitely needs thought for a Production system.
My suggestions would be to try to achieve (in order of significance):
Different drives for MDF and LDF. Ideally the MDF drive chosen for good random-access performance and the LDF drive for sequential file access.
Different drive for Backup Files
(Apart from performance, this also means that a single drive, or disk controller, failure is likely to leave the other drives unaffected. So if your Data file is corrupted your Log file won't be, and you can take a Tail-Log backup, restore the last Full backup plus all Log backups since, and have ZERO data loss. If the Log drive fails you likely still have an uncorrupted Data file, so you can sort things out from that, again with zero data loss.)
For further performance improvements consider putting the TEMPDB system database on a different drive.
I doubt very much that you need to do any of that for an initial "learning SQL development environment", but if you have two drives available I recommend splitting MDF / LDF locations
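If you do have two drives, the split can be made explicitly when creating a database rather than relying on server defaults. A sketch (the drive letters, folders, and database name are hypothetical, and the folders must already exist):

```sql
-- Data file on one drive, log file on another.
CREATE DATABASE Inv002
ON PRIMARY (NAME = Inv002_data, FILENAME = N'D:\SQLData\Inv002.mdf')
LOG ON     (NAME = Inv002_log,  FILENAME = N'E:\SQLLogs\Inv002.ldf');
```

Databases created this way ignore the server's default locations entirely.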
Next up consider whether to use FULL or SIMPLE Recovery Model (this is set per database).
SIMPLE will reuse the LDF file space as soon as each transaction is completed (a slight over-simplification). You can only restore to the last FULL backup that you made, or the last FULL backup plus a subsequent DIFFERENTIAL backup based on that FULL backup. For most people this would mean restoring to a once-a-day backup.
FULL will keep all transaction log entries until they are backed up. The LDF file will grow (until backed up). You can restore from the last FULL backup, plus an optional DIFFERENTIAL backup, PLUS all the TLog backups since. If you want to restore to a point-in-time in-between two TLog backups there is even a STOPAT Date/Time command so you can restore to e.g. precisely 10:15 yesterday morning.
Unless you are able to easily recreate all the data in your database (e.g. because you import 100% of it from somewhere else, rather than typing new data into the DB), I recommend you use FULL and also make a Log backup every few minutes. There is no point making a Log backup once a day - your LDF file will grow big enough to hold a whole day's transactions, and if your server breaks at 23:59, just before the daily Log backup, you still lose the lot! Whereas if you back up the Log every 5 minutes, that will be your maximum data loss.
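A sketch of that setup in T-SQL (the backup paths are hypothetical, and in practice the Log backup line would run on a schedule every few minutes, not by hand):

```sql
-- Switch the database to FULL recovery so log backups are possible.
ALTER DATABASE Inv001 SET RECOVERY FULL;

-- The FULL backup is the baseline; each log backup then captures
-- everything since the previous log backup.
BACKUP DATABASE Inv001 TO DISK = N'F:\Backups\Inv001_full.bak';
BACKUP LOG      Inv001 TO DISK = N'F:\Backups\Inv001_log_1015.trn';
```

Note that after switching to FULL, the log chain only starts once you have taken a FULL database backup.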
We set ALL Dev databases to FULL because in DEV we typically need to restore to "just before I did that stupid thing!!"
If you go down that route I suggest you deliberately perform a restore to a point in time, just to prove that you can, before you really REALLY need to do it in earnest.
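Such a practice run looks roughly like this, using the STOPAT option mentioned above (file names and the timestamp are hypothetical; add WITH REPLACE if you are restoring over an existing database):

```sql
-- Restore the baseline, leaving the database ready for log restores.
RESTORE DATABASE Inv001 FROM DISK = N'F:\Backups\Inv001_full.bak'
    WITH NORECOVERY;

-- Roll the log forward, stopping at precisely 10:15.
RESTORE LOG Inv001 FROM DISK = N'F:\Backups\Inv001_log_1015.trn'
    WITH STOPAT = '2017-06-01T10:15:00',
         RECOVERY;  -- bring the database back online
```

If you have several log backups, restore each one WITH NORECOVERY and put STOPAT and RECOVERY only on the last one.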