Chapter 16

Using databases with X#

Finally, the chapter you might have been waiting for through the previous 15 chapters. Or maybe you just skipped ahead, leaving out the "boring" rest. Since the beginning of "personal programming" in the mid-80s and early '90s, the database has been the center of the vast majority of Windows applications. From my modest perspective and observations, this hasn't changed much in the year 2023. Despite all the talk about cloud-based applications (which in many cases also use a standard DBMS), the database is still the most important asset for the majority of desktop and web applications.

This chapter is mainly about the classes that the .Net runtime offers for accessing any kind of DBMS.

This chapter is not about DBServer, RDD, or tools like ReportPro. These components are (still) important and thanks to the dedicated development team are still under active development, but they are not part of the .Net runtime which is the main focus of this book.

And this chapter is not about integrating data into a WinForms or WPF application. You will find out that once you understand the basics of database access with .Net, the "rest" can be implemented with little effort. One thing to keep in mind, though, is that there is no "integrated database access" in either WinForms or WPF. A WinForms application doesn't care about where the data comes from. If UI data binding should be used, all it needs is an array, a DataTable, or better a generic list of "Plain Old CLR Objects" (POCOs). Where the data originally came from does not matter.

"X# core" does not have any database-specific commands or functions. It's all part of the .Net runtime and its classes in the namespace System.Data and the provider-specific namespaces like System.Data.SqlClient.


NOTE: Since not everybody might be happy with my "100% pure .Net" approach in this chapter about such an important topic, I recommend getting an overview of the RDDs, which can still be a very convenient and satisfying alternative, by watching the videos of the X# academy on YouTube (just search for "X# RDD"). And if there is a real need for more coverage of these topics, it is no problem at all to extend this chapter for the next edition with new topics like a comparison between DBServer and ADO.Net (thanks to the way "print on demand" works, this could happen at any time).


The sample databases for this chapter

A little housekeeping first. Many of the samples in this chapter use the well-known Mondial database, which contains a lot of information about the countries of the world. If you want to try out all the samples (which I recommend, because that's what this book is about), you should take a moment and familiarize yourself with the schema.

You will find the complete schema on the website of the Universität Göttingen, which uses the database for its classes:

https://www.dbis.informatik.uni-goettingen.de/Mondial/mondial-ER.pdf

Since the samples in this chapter use only a small portion of the schema, my own version of that schema contains only three of the 46 table definitions, for a better understanding of the samples.


NOTE: The Mondial database is very good for trying out all the different query options, joins, groupings, etc. Updating the database is possible too, of course, but it can sometimes be a little complicated due to several foreign key constraints.


Fig 16.1: A few table definitions as part of the Mondial database schema

The Mondial database for this chapter is provided as an SQLite database file mondial.db that is part of the samples directory. This directory also contains the needed SQLite provider for .Net as a single file with the name System.Data.SQLite.dll.

A few examples in this chapter use another SQLite database: booklib.db3 (the only reason for this extension is to show that the extension doesn't matter). This database file is also contained in the samples directory.

An even better approach to learning about the schema of a SQLite database is to use one of the several SQLite GUIs like DB Browser for SQLite. Both Visual Studio and Visual Studio Code can display the content of a SQLite database file, either directly or after installing one of several extensions.

Fig 16.2: DB Browser for SQLite makes administering a SQLite database file simple

The tiny BookLib database has a schema too. The interesting table is BookCopy because it contains several fields with different data types.

Fig 16.3: The schema of the tiny BookLib database

And just in case you have skipped all the other chapters of this book, here is how you clone all the samples from the Github repo for this book:

  1. Install Git for Windows (if it's not already installed)
  2. Open a command line window (I recommend PowerShell although it does not matter)
  3. Cd into an appropriate directory (like documents) - DO NOT stay in System32
  4. Type in the following command:

git clone https://github.com/pemo11/X-Kompendium-Examples

It should take a few seconds and a new directory X-Kompendium-Examples is created that contains all the samples organized by chapters. You don't have to do anything more with Git than to "download" the sample files.

Download the SQLite data provider for .Net

This step is not necessary for using the sample programs of this book because the required file System.Data.SQLite.dll is part of the samples directory. But as always, there is a little catch: the version of the file is 1.0.117.0 and the architecture is x64, which means it cannot be used on a 32-bit system. So, in some rare situations, it might be necessary to install another version of this file.


NOTE: The version of System.Data.SQLite.dll for "AnyCPU" needs another file: SQLite.Interop.dll. This is not a .Net assembly. This library contains the real SQLite code, which is written in C. But remember, you cannot add a reference to this file because it's not an assembly. It has to be in the same directory as System.Data.SQLite.dll.



TIP: This is an important tip. Although it's no problem to download the DLL from the project website like in the good old days, with Visual Studio I recommend installing it as a Nuget package. It's more convenient, the right "platform version" will be installed, and you will avoid the nasty "There was a mismatch between the processor architecture of the project ..." warning.


Use the following steps:

  1. Right-click the project entry in the solution explorer and choose Manage Nuget packages....
  2. Select the Browse tab.
  3. Type in "System.Data.SQLite" to search the Nuget package repository.
  4. Select the appropriate package (usually the first one).
  5. Turn your eyes to the right side where all the details of the package are listed and click Install. If the button is greyed out, the package is already installed for this project.

The package will be downloaded to the packages subdirectory in the project directory and the required references will be set. There is nothing more to do.

Fig 16.4: Installing System.Data.SQLite as a Nuget package in Visual Studio

One small disadvantage of this approach is that the package contains a lot more than just the System.Data.SQLite assembly. It also contains an assembly for the Entity Framework and will even install the Entity Framework (the object-relational mapper from Microsoft) if it's not part of the project already. There is no way to select the optional components.

So, if you still prefer the direct download, the download address is https://system.data.sqlite.org/index.html/doc/trunk/www/downloads.wiki

The download page contains a lot of (useful) information. But maybe too much if you are just interested in downloading one assembly file. What you usually want to download for the .Net Framework is "Precompiled Binaries for 64-bit Windows (.NET Framework 4.6)". You have to scroll down to the last entries. Don't worry if you are using .NET Framework 4.7 or 4.8. The version numbers don't matter in this case.

The zip files contain a lot of useful files, one of them is test.exe for testing the provider with a small database test.db which is also part of the zip file. But the only file that is necessary for your application is System.Data.SQLite.dll.


TIP: I recommend using the 64-bit version (I could not get the 32-bit version to work on a 64-bit Windows). But this also means setting the platform to "x64" in Visual Studio project properties (if it does not compile on the first attempt, don't give up, it will work;).


Fig 16.5: The download page for the SQLite .Net provider

Why SQLite?

The big advantage of SQLite is its simplicity in combination with a powerful SQL dialect that complies more or less with the "official" SQL Standard. The data provider for .Net consists of a single assembly that is either installed as a Nuget package or has to be downloaded from the project website. There is no need for an installation that might require administrator privilege or might not be possible due to some peculiar circumstances.

SQLite is not a general-purpose DBMS; it's only suited for single-access scenarios where an application needs a database that can be accessed with standard SQL syntax. The database is always a single file.


NOTE: SQL Server Compact from Microsoft is not an option, although it is also a file-based DBMS. It was deprecated by Microsoft many years ago.


Otherwise, I could have chosen any DBMS type like MS SQL Server, Oracle (both offer a free edition), a VO database, a FoxPro dbf, or even a dbc file. An Excel workbook would be possible too. For accessing all these "legacy" file-based databases, the data provider is either the built-in OLE DB or ODBC provider.

It's important to understand that except for the connection string (and some differences in the SQL syntax of course), there is no need to adapt the program code to a specific DBMS.

Choosing the right connection string

The connection string is, as the name implies, a string that contains all the information necessary for connecting to a specific database. The keywords are not standardized; they are defined by the developer of the data provider (in the case of the built-in providers, that is Microsoft, of course).

This is a connection string for accessing an MS SQL Server database:

Data Source=.\SQLExpress17;Initial Catalog=TestDb;Integrated Security=SSPI

Integrated Security=SSPI means that the authentication is done with the credentials used for the Windows logon.

This is the connection string for accessing an Oracle database:

Data Source=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=oravm2)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=Eureka)));User Id=dbAdmin;Password=dbAdmin

This time, both the username and the password are part of the connection string, which of course is not considered best practice.

Finally, the connection string for accessing a local SQLite database whose filename is mondial.db:

Data Source=.\Mondial.db

It is as simple as that. This time there are no usernames and passwords because the database is not password-protected.

Connection strings are not case-sensitive, but blanks are important.
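If a connection string has to be assembled at runtime, the provider-independent DbConnectionStringBuilder class (namespace System.Data.Common) takes care of the separators and of quoting values that contain blanks or special characters. A small sketch with the SQLite connection string from above:

using System.Data.Common

Function BuildConString() As String
   var builder := DbConnectionStringBuilder{}
   builder:Add("Data Source", ".\mondial.db")
   Return builder:ConnectionString

The provider-specific variants like SqlConnectionStringBuilder additionally validate the keywords.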


NOTE: It's possible to encrypt the connection string completely with little effort and a key. Please search the Microsoft documentation for a description with an example.



NOTE: Beware that the SQLite provider does not raise an error when the database file does not exist. It simply creates a new file. Since that file is empty, it might take a while to catch that error. The extension does not have to be .db.
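A simple precaution against this pitfall is to check for the database file before opening the connection (a sketch, using the database filename from this chapter):

using System.IO

Function CheckDb(dbPath As String) As Logic
   if !File.Exists(dbPath)
      ? i"Database file {dbPath} not found!"
      Return false
   endif
   Return true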



TIP: For many years (actually several decades by now), the website https://www.connectionstrings.com has offered a wealth of connection strings for all kinds of DBMS types.


Keeping the connection string outside the source code

It's a good idea not to write the connection string into the source code. The best place, from my perspective, is the config file of the application. To make this possible, a reference to System.Configuration.dll has to be included in the project.

The first step is to add a config file if it's not already there. In Visual Studio, add an App.config template. Outside Visual Studio, create a text file with the name of the exe file and a ".config" extension, like XS_HelloAdoNet.exe.config.

The naming is important; otherwise, the .Net runtime won't recognize the config file. And it has to be in the same directory as the exe (although it is possible to change the path).

Put the following content in the config file:

<configuration>
  <connectionStrings>
    <add name="mon"
      connectionString = "Data Source=.\mondial.db"
    />
  </connectionStrings>
</configuration>

As usual, the names of the tags are case-sensitive. "mon" is just a key for retrieving the connection string later. If the namespace System.Configuration is imported, the following code retrieves the connection string:

var conStr := ConfigurationManager.ConnectionStrings["mon"]:ConnectionString

The following example uses this technique for getting some data from the Mondial database and printing it out in the console.

**Example 16.1: (XS_HelloAdoNet.prg)**

// A first example for how to query data with Ado.Net

using System.Configuration
using System.Data
using System.Data.SQLite

Function Start() As Void
   var conStr := ConfigurationManager.ConnectionStrings["mon"]:ConnectionString
   var sqlCmd := "Select * From Country Order by Name"
   var da := SQLiteDataAdapter{sqlCmd, conStr}
   var ta := DataTable{"Countries"}
   var retVal := da:Fill(ta)
   ?i"{retVal} rows retrieved"
   ForEach row As DataRow in ta:Rows
      ?i"{row[""Name""]} has a population of {row[""Population""]}"
   Next

All other samples in this chapter use an "embedded" connection string.

Database access the .Net way

When the .Net Framework 1.0 became official in 2002, many experienced developers were shocked when they found out that .Net does not have any equivalent to the traditional cursor model. The reason was that the "fathers" of the .Net Framework had something completely different in mind. They were designing their new runtime for an environment where most of the data would be delivered by web services in the XML format. The traditional database model, where a database server exists on the local network or even on the same computer and there is always an open connection, was not taken into account.

A database query always should follow a simple pattern:

  1. Open the connection
  2. Query or update the database
  3. For a query, hold the result either in a DataTable or store it in a variable.
  4. Close the connection

Since after querying the data, there is no connection to the database anymore, any changes to the database by other applications or users would not be detected by the application.

All inserts, updates, and deletes are accomplished exclusively by SQL statements. The syntax of each statement depends on the DBMS, which the data provider encapsulates with a general set of classes.

According to the .Net documentation, the following providers are already part of the .Net Framework runtime:

  1. Microsoft SQL Server
  2. OLE DB
  3. ODBC
  4. Oracle
  5. EntityClient
  6. SQL Server Compact 4.0

The provider for the Oracle DBMS should not be used if special Oracle features like Ref Cursors are important. A better alternative is the compatible provider that Oracle offers for free through its developer portal (registration is required).

That does not mean, of course, that DBMS types like FoxPro, MS Access, or DB2 cannot be used with the .Net runtime. ODBC and OLE DB are always the smallest common denominator. Even a dBase dbf file can be read and written through the built-in ODBC classes. For popular DBMS like MySql or PostgreSQL, companies like DevArt offer high-quality and affordable .Net data providers.

Since a custom-installed .Net data provider has to be registered either in machine.config, in the config file of the application, or by code, it's simple to list all machine-wide registered providers with the help of the DbProviderFactories class and its GetFactoryClasses() method, which returns a DataTable with a row for each provider. An example will be shown at the end of this chapter.
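As a small foretaste, here is a sketch of how such a listing could look:

using System.Data
using System.Data.Common

Function Start() As Void
   var dt := DbProviderFactories.GetFactoryClasses()
   ForEach row As DataRow in dt:Rows
      ? i"{row[""Name""]} - {row[""InvariantName""]}"
   Next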

The role of ADO.Net

Let's get this out of the way as soon as possible. When the .Net Framework was planned in the late 90s, the marketing team at Microsoft decided that the "Data Access Layer" should sound familiar to the many VB and C++ developers who had already known the OLE DB-based version of ADO for many years. So this part of the .Net runtime was labeled "ADO.Net". Besides the name, it has nothing to do with the classic ADO. It neither contains a Recordset object nor is it compatible with ADO in any way.

ADO.Net is more or less an alias for several components of the .Net Framework:

  • The data provider component
  • The extensible model that allows registering additional providers
  • The provider-specific classes that each data provider provides
  • A set of provider-independent classes in the namespaces System.Data and System.Data.Common

TIP: David Sceppa (who was a member of the developer team at Microsoft at this time) wrote a very good book about ADO.Net. Chapters 11 and 12, about writing data back to the DB and the resulting concurrency issues, are especially helpful. The book might put too much emphasis on MS SQL Server, but that's okay, because most of the information applies to any DBMS (and it might have something to do with the company the author worked for at this time;). Although the book is from the year 2005 and therefore almost 18 years old, it is still useful in 2023 because only very few things have changed in ADO.Net since then (if at all). You will find this book at used booksellers like Abebooks and many others, and maybe at a flea market in your neighborhood too. Highly recommended.

"Programming Microsoft ADO.NET 2.0 Core Reference"

There is also a German version of this book published by Microsoft Press that is just called "ADO.NET 2.0".


Querying data

Querying data with .Net and X# is simple and straightforward by using a Select statement or a Stored Procedure.

There are two ways to get data out of any database:

  1. With a DataReader
  2. With a DataAdapter and a DataTable or a Dataset

Whereas a DataReader always delivers a forward-only/read-only reader, the DataTable, which is filled by a DataAdapter, offers a familiar row/column model. So it's more like a two-dimensional array (with many extras).


Important: There is no equivalent to a classic Recordset (with an active record) in .Net. Neither in C# nor in any other .Net language. The closest thing to a record set is a BindingSource in combination with a DataTable. The BindingSource class (namespace System.Windows.Forms) offers a Current property and methods like MoveFirst() and MovePrevious().


But since it has no connection to any DBMS there is neither an automatic "save" to the database nor is the BindingSource able to detect any changes made in the database since the rows had been retrieved. It's completely "offline" like a DataTable or an ordinary array.
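A minimal sketch of that combination (assuming ta is a DataTable that was filled before, as in the examples later in this chapter):

using System.Data
using System.Windows.Forms

// ta is a previously filled DataTable
var bs := BindingSource{}
bs:DataSource := ta
bs:MoveNext()    // advance the internal "record pointer"
var currentRow := ((DataRowView)bs:Current):Row
? currentRow["Name"]

Since the DataSource is a DataTable, the Current property returns a DataRowView that wraps the underlying DataRow.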

Querying data with a DataReader

A DataReader is either based on a provider-specific class like SqlDataReader or on the provider-independent class DbDataReader. It will not be created directly but always through the ExecuteReader() method of a command object. A DataReader is a little special for several reasons:

  1. The next "record" is always retrieved by the Read() method, which returns True if there is a next record and False if there is no (more) record.
  2. Apart from the untyped Item indexer, all fields have to be accessed with type-specific methods like GetString(), GetInt32(), or GetDateTime().
  3. Each of these Get methods accepts only a so-called "ordinal number" (probably for performance reasons). The ordinal number is like an index that starts at 0. Every field name has to be translated into its ordinal number first by calling GetOrdinal().

A DataReader always needs two ingredients:

  1. An open database connection
  2. Either a Select-Statement or the name of a stored procedure (SQL Server) or a Command object

There is a HasRows property to check if the reader returned any rows.

A DataReader is always tied to a specific command object, so it's not possible to reuse a DataReader for another command.

As already stated several times, the sole purpose of a DataReader is (as the name implies) to read data from a data source. There is no way to write data back.

A DataReader cannot be bound directly to a UI element or a data class. But it is possible (although not recommended due to performance issues) to load a DataTable with a DataReader.

A DataReader is formally described as a forward-only/read-only cursor. The internal pointer to the current "record" advances with every read operation. There is no way to get the total number of rows in advance without going through all the rows. Since there is no overhead needed for the housekeeping, DataReaders are fast (but there is no big difference to a DataTable that is filled through a DataAdapter).

The following example queries data from the Mondial SQLite database that is used for all examples in this section. Each row contains fields like 'Name', 'Population', etc.

**Example 16.2: (XS_SqliteDataReader.prg)**

// Getting data from a SQLite database with a SQLiteDataReader
// compile with xsc .\XS_SqliteDataReader.prg /r:System.Data.SQLite.dll

using System.Data
using System.Data.SQLite

Function Start() as Void
   Local conString := "Data Source=.\mondial.db" As String
   Begin Using var cn := SQLiteConnection{conString}
     cn:Open()
     Local cmd := cn:CreateCommand() As SQLiteCommand
     cmd:CommandText := "Select * From Country"
     Local dr := cmd:ExecuteReader() As SQLiteDataReader
     While dr:Read()
        ? i"{dr.GetString(dr.GetOrdinal(""Name""))} - Population: {dr.GetDecimal(dr.GetOrdinal(""Population""))}"
     End While
   End Using

Handling null values

Handling null values with a DataReader is simple thanks to the IsDBNull() method of the DataReader class:

var dateValue := IIf(!dr:IsDBNull(dr:GetOrdinal("PurchaseDate")), (DateTime?)dr:GetDateTime(dr:GetOrdinal("PurchaseDate")), (DateTime?)NULL)

The variable either gets the value of the field PurchaseDate or a DateTime? with a null value. Without this precaution, a DBNull value in the field PurchaseDate would result in an exception like an InvalidCastException.

There are more or less three schools of thought:

  1. Don't worry about null values. If a field value is null it's because the database is not consistent. It's the responsibility of the user to fix it.
  2. It's all handled by exception handling.
  3. Handle the null value for all fields that might have null values.

I prefer option 3. But whichever you prefer, each approach has its merits.

The only drawback is that you might have to write a lot more code if each field access is guarded by IsDBNull().

The following example defines an extension method GetDateTimeEx() for the SQLiteDataReader class that does the null handling, so the code becomes a little shorter and more readable.

**Example 16.3: (XS_SqliteDataReaderNull1.prg)**

// Querying potential DBNull values with the IsDBNull() method
// compile with xsc .\XS_SqliteDataReaderNull1.prg /r:System.Data.SQLite.dll

using System.Data
using System.Data.SQLite

Function Start() as Void
   Local conString := "Data Source=.\booklib.db3" As String
   Begin Using var cn := SQLiteConnection{conString}
     cn:Open()
     Local cmd := cn:CreateCommand() As SQLiteCommand
     cmd:CommandText := "Select * From BookCopy"
     Local dr := cmd:ExecuteReader() As SQLiteDataReader
     While dr:Read()
        ? i"{dr.GetString(dr.GetOrdinal(""Book""))} - Purchase date: {dr.GetDateTimeEx(""PurchaseDate"")}"
     End While
   End Using

// The extension method that encapsulates the IsDBNull() check
Static Class ReaderExtensions
   Static Method GetDateTimeEx(Self dr As SQLiteDataReader, fieldName As String) As DateTime?
      var ordinal := dr:GetOrdinal(fieldName)
      Return IIf(dr:IsDBNull(ordinal), (DateTime?)NULL, (DateTime?)dr:GetDateTime(ordinal))
End Class

Of course, it would be much more elegant if the extension method used a generic type so it could handle any field type, but I have not figured out how to accomplish this.
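If you want to experiment yourself, a possible starting point could look like the following sketch. This is just an assumption on my part, not part of the samples, and the name GetValueEx() is made up:

using System.Data.Common

// a sketch, not verified - the caller has to pass a nullable type
// like DateTime? to get a null value back for a DBNull field
Static Class ReaderExtensionsGeneric
   Static Method GetValueEx<T>(Self dr As DbDataReader, fieldName As String) As T
      var ordinal := dr:GetOrdinal(fieldName)
      if dr:IsDBNull(ordinal)
         Return Default(T)
      endif
      Return (T)dr:GetValue(ordinal)
End Class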

Querying data with a DataTable (or a DataSet)

Pulling the rows from a database query into a WinForms or WPF application is usually done by filling the rows into a DataTable with the help of a provider-specific DataAdapter. The process is really easy and always consists of three steps:

  1. Instantiating a DataAdapter with a Select command and a connection string.
  2. Instantiating a DataTable
  3. Calling the Fill() method of the DataAdapter.

The Fill() method returns the number of rows that were read with the query.

The following example again queries the Mondial SQLite database. This time it uses a SQLiteDataAdapter to fill the retrieved rows into a DataTable.

**Example 16.4: (XS_SqliteDataAdapter.prg)**

// Getting data from a SQLite database with a SQLiteDataAdapter and a DataTable
// compile with xsc /r:System.Data.SQLite.dll

USING System.Data
USING System.Data.SQLite

FUNCTION Start() AS VOID
   LOCAL conString := "Data Source=.\mondial.db" AS STRING
   LOCAL sqlText := "Select * From Country" AS STRING
   LOCAL da := SQLiteDataAdapter{sqlText, conString} AS SQLiteDataAdapter
   LOCAL ta := DataTable{} AS DataTable
   VAR rowCount := da:Fill(ta)
   ?i"{rowCount} rows loaded"
   // The primary key is not part of the DataTable yet
   ?i"Primary key columns: {ta:PrimaryKey:Length}"
   FOREACH row AS DataRow IN ta:Rows
      // ? row["Population"]:GetType():Name
      ? i"{row[""Name""]} - Population: {row[""Population""]}"
   NEXT

It might seem a little unusual at first sight how a single value from a row is retrieved:

row["Population"]

or

? i"{row[""Name""]} - Population: {row[""Population""]}"

The field is selected either by its name or by a DataColumn object through the Item indexer. There is no typed Get() method or something similar to retrieve the value of a field from a row. But each value is typed; the Population value is of type Decimal, for example.
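By the way, typed access is possible with the generic Field<T>() extension method (namespace System.Data; it requires an additional reference to System.Data.DataSetExtensions.dll):

// requires a reference to System.Data.DataSetExtensions.dll
var population := row:Field<Decimal>("Population")
var name := row:Field<String>("Name")

For value types, a nullable type parameter like Decimal? maps a potential DBNull value to null instead of throwing an exception.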

Not having a property for accessing each field makes debugging a little inconvenient, since a row is a single object. I recommend using the ItemArray property, which contains all the field values as an array. Not perfect either, but at least a way to inspect the whole content of a row.

Fig 16.6: During debugging, the ItemArray property of a row is helpful

Reading the primary keys

If a database table contains primary keys, they are not added to the DataTable automatically unless the MissingSchemaAction property of the DataAdapter is set to MissingSchemaAction.AddWithKey:

var da := SQLiteDataAdapter{sqlText, conStr}
da:MissingSchemaAction := MissingSchemaAction.AddWithKey
da:Fill(ta)

If the table definition defines one or more primary keys, they are now part of the PrimaryKey property of the DataTable.

**Example 16.5: (XS_DataTableAddPrimaryKey.prg)** You will find the source code in the repository.
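With AddWithKey in place, the key columns can be listed like this:

// ta is the DataTable that was filled with AddWithKey in place
ForEach col As DataColumn in ta:PrimaryKey
   ? col:ColumnName
Next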

And what about a DataSet?

A DataTable holds its data completely in memory. It's organized in rows and columns and is completely independent from the data source. It doesn't matter where the data originally came from. It's no problem at all to add additional rows at any time with arbitrary content. Another class in the namespace System.Data is DataSet. It contains one or more DataTables, and it is possible to create parent-child relationships between tables. It can directly load XML data, and it allows schema validation, which is always optional. Because of its capabilities, the DataSet is called an "in-memory cache of data".

An excerpt from the Microsoft documentation explains the DataSet very well:

The DataSet represents a complete set of data that includes tables, constraints, and relationships among the tables. Because the DataSet is independent of the data source, a DataSet can include data local to the application, and data from multiple data sources. Interaction with existing data sources is controlled through the DataAdapter.


NOTE: I have only seldom used a DataSet in a project in recent years, especially a "typed DataSet" where the column data is typed. A DataTable is sufficient in most cases.


Processing XML

There was a time when the Extensible Markup Language (XML) was seriously considered as an alternative to the proprietary binary formats used by DBMS. Although MS SQL Server has had an XML data type since version 2005, I don't think that this option ever played a significant role (on the other hand, what do I know;). Although XML was a big topic when the first version of the .Net Framework was released, there is no data provider for reading and writing XML data. But reading and writing XML is very simple with a DataSet, schema validation included. As soon as the data is loaded into a DataTable of the DataSet, the original format does not matter anymore and the data can be transferred to a DBMS.
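A sketch of such a round trip (countries.xml is just a hypothetical filename):

using System.Data

Function Start() As Void
   var ds := DataSet{}
   // load XML data; an inline schema is picked up automatically
   ds:ReadXml("countries.xml")
   ? i"{ds:Tables[0]:Rows:Count} rows loaded"
   // write the data back, this time with an inline schema
   ds:WriteXml("countriesCopy.xml", XmlWriteMode.WriteSchema)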

Executing SQL commands

It always seems a little strange to me as a German to use the verb "to execute" when I want to say "do something with a command" (I don't like to "kill" programs either, but that is another topic). But "executing a command" is an official term, and the command object offers an ExecuteNonQuery() method.

Whereas a Select command returns a DataReader or is used to fill a DataTable, ExecuteNonQuery() just returns the number of records affected by the command.

The easiest way to get a new command object is by using the CreateCommand() method of the connection object. The small advantage is that the Connection property is already set:

var cmd := cn:CreateCommand()
cmd:CommandText := "Select * From Book"

A connection has to be opened with the Open() method and closed with the Close() method later:

Begin Using var cn := SQLiteConnection{cnStr}
   cn:Open()
   var cmd := cn:CreateCommand()
   var id := 1000
   cmd:CommandText := "Select * From Book Where Id=" + id:ToString()
End Using

Thanks to Begin Using the Close() call can be omitted.

Too many open connections can be a performance issue, but since connection pooling is active by default with MS SQL Server and Oracle, this should not be a problem in most cases (on the other hand, I am far from being an expert on the complex topic of database performance).

There are several options for creating SQL statements with and without parameters:

  1. By concatenating the SQL statement with its parameters as a regular string
  2. By using string interpolation
  3. By using a special string builder class that makes dealing with parameters a little easier and less error-prone
  4. By using parameterized queries which is the recommended way

Option 1: Concatenating a SQL string

Although it's probably not the most convenient option, concatenating a SQL statement from strings is simple and will probably always achieve its goal, especially since string interpolation was improved with a more recent version of the X# compiler.

The following example concatenates a SQL Insert command that inserts a new row into the Airport table of the Mondial database.

var sqlText := "Insert Into Airport (IATACode,Name,Country,City,Latitude,Longitude,Elevation) "
sqlText += "Values('EXY','Flugplatz Roemerstraße','D','Esslingen',"
sqlText += "48.7433425,"
sqlText += "9.3201122,"
sqlText += "241)"

**Example 16.6: (XS_SqlInsertWithConcatenate.prg)** You will find the source code in the repository.

A big disadvantage of creating a SQL command by concatenation is the danger of SQL injection. Everybody who knows at least a little bit about application security will tell you that. Whenever possible, you should use parameterized queries, where it is impossible to inject additional conditions or commands into a Select statement.

I don't think it is an issue for a "closed application" that only known users can use and where there is no way of injecting something into an internal SQL query. But to follow the rules and guidelines, I recommend using parameterized queries for such applications too.

SQL injection? I've heard of this somewhere before

Although the topic of SQL injection appears very old (the term first appeared in 1998), it's still relevant today and probably will be for the foreseeable future. SQL injection means that an evil actor can "inject" something into an otherwise harmless SQL query string. This something could be a second query condition that is always true, or another SQL statement that deletes records, for example. The sad consequence is that a different database query, or even a destructive operation, is executed.

The classic example is typing "Rolf' or Username like '%" into the username field of a login form on a web page.

Instead of

Select * From User Where Username='Rolf'

the query statement sent to the DBMS will be

Select * From User Where Username='Rolf' or Username like '%'

Everybody who knows a little about SQL will bet his 52" monitor on the assumption that this query returns all rows from the User table.

But there is more danger. What about adding a Delete statement so that a harmless query turns into a

Select UserName From User Where Username='inputField';Delete From User

Without (parameter) validation, the database access layer will probably execute this command like any other command.

The attacker doesn't have to know anything about the database schema. Some diligence, insider information, or just plain "luck" can be enough.

Of course, this will only make sense within a web application where the user names and other information about a user might even appear as part of the returned HTML content.

SQL injection made many attacks possible where the bad actors needed very little knowledge. The only requirements were a browser and some criminal energy. Today, all the major websites should be safe, but I won't bet a lot of money on that assumption.

It's hard (at least for me) to estimate the potential danger of SQL injection for a typical desktop application. Although the probability of a successful attack is very low or even zero in most circumstances, my recommendation is to use query parameters whenever possible. How this can be done will be described in one of the following sections.

Option 2: Using string interpolation

String interpolation is a simple but powerful technique for inserting values into a string without the hassle of counting apostrophes and using + signs.

The following example uses string interpolation to place variable values into a SQL Insert command that inserts a new row into the Country table of the Mondial database.

var country := "JavaLand"
var capital := "Bean City"
var code := "Java"
var area := 0
var population := 1
var sqlText := "Insert Into Country (Name,Code,Capital,Area,Population) "
sqlText += i"Values('{country}','{code}','{capital}',{area},{population})"

Example 16.7: (XS_SqlInsertWithStringInterpolation.prg) ** You will find the source code in the repository

Option 3: Using the StringBuilder class

StringBuilder is a class in the namespace System.Text. Because concatenating strings in .Net always means that a new string is created in memory and the old content is copied into it, which can be (extremely) time consuming, strings inside a loop should always be built with a StringBuilder instead. The final string is retrieved with the ToString() method.

The following example uses a StringBuilder for constructing an SQL insert command that inserts a new row into the Airport table of the Mondial database.

var sb := StringBuilder{}
sb:Append("Insert Into Airport (IATACode,Name,Country,City,Latitude,Longitude,Elevation) ")
sb:Append("Values('EXY','Flugplatz Roemerstraße','D','Esslingen',")
sb:Append("48.7433425,")
sb:Append("9.3201122,")
sb:Append("241)")

Example 16.8: (XS_SqlInsertWithStringBuilder.prg) ** You will find the source code in the repository

Option 4: Use parameterized statements

SQL queries that are based on plain strings can be very dangerous because of SQL Injection.

The following example shows the same Insert statement that was used in one of the previous examples, but this time with parameters:

cmd:CommandText := "Insert Into Country (Name,Code,Capital,Area,Population) Values (@name,@code,@capital,@area,@population)"
//Add all the parameters with their values (no @ before the name)      
cmd:Parameters:AddWithValue("name", "Funnyland")
cmd:Parameters:AddWithValue("code", "fun")
cmd:Parameters:AddWithValue("capital", "USA")
cmd:Parameters:AddWithValue("area", 0)
cmd:Parameters:AddWithValue("population", 123)

Usually, there is no need to specify either a type or a direction for each parameter. The name does not have to be identical to a field name either. But the @ has to be omitted.

Alternatively, with anonymous parameters the whole statement would look like this:

Insert Into Country (Name, Code, Capital, Area, Population) Values (?,?,?,?,?)

Example 16.9: (XS_SqlInsertWithParameters.prg) ** You will find the source code in the repository


NOTE: The syntax for a parameter name is provider-specific. There are small differences between an MS SQL and an Oracle database for example. Most providers, if not all, also support a ? as an anonymous parameter name within the SQL statement. In this case, the name of the added parameter does not matter.
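
As a sketch of this anonymous variant (assuming the provider accepts positional ? placeholders; the parameter names p1 to p5 are arbitrary):

// Sketch: anonymous parameters with ? placeholders (assumes provider support).
// The order of the Add calls must match the order of the ? placeholders;
// the parameter names do not matter.
cmd:CommandText := "Insert Into Country (Name,Code,Capital,Area,Population) Values (?,?,?,?,?)"
cmd:Parameters:AddWithValue("p1", "Funnyland")
cmd:Parameters:AddWithValue("p2", "fun")
cmd:Parameters:AddWithValue("p3", "USA")
cmd:Parameters:AddWithValue("p4", 0)
cmd:Parameters:AddWithValue("p5", 123)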


Hardening the SQL statements of a desktop application against SQL injection, by using parameters for example, is like checking the tire pressure before going on a long journey with your car. It won't be necessary in 99% of all cases, but you have a better feeling of safety and drive with a clear conscience. And if something bad happens, you are not to blame for being careless.

Using ExecuteNonQuery() for Delete-, Insert- and Update-Commands

Every SQL statement that does not return rows is "executed" through the ExecuteNonQuery() method of a DbCommand (this is just the generic term for a provider-specific command object). The method has no parameters, but it has a return value: usually the number of rows affected by that command.

The following example just inserts a new row into the Book table:

Local sqlText := "Insert Into Book (AuthorId, title, pages) Values(@authorId,@title,@pages)" As String
Begin Using var cn := SqliteConnection{cnStr}
   var cmd := cn:CreateCommand()
   cn:Open()
   cmd:CommandText := sqlText
   cmd:Parameters:Add(SqliteParameter{"authorId","pemo1"})
   cmd:Parameters:Add(SqliteParameter{"title", "The great book about nothing"})
   cmd:Parameters:Add(SqliteParameter{"pages", 1})
   var retVal := cmd:ExecuteNonQuery()
End Using

Getting single values with ExecuteScalar()

One of the most popular questions on any forum that has database programming as its topic is: "How do I get the value of an autoincrement field after an Insert?" The answer is: it depends on the DBMS type. With .Net it usually means fetching the new value with a second query; it's not automatically returned by ExecuteNonQuery().

For SQLite the recommended way for querying the value of the Id field after an Insert is to query the special table sqlite_sequence. Since this query only returns a single value, the ExecuteScalar() method is the recommended way:

cmd:CommandText := "Select seq From sqlite_sequence Where name = 'Book'"
var newId := cmd:ExecuteScalar()

The following example uses the Book table of the BookLib SQLite sample database for this chapter. Its Id field is an autoincrement field.

Example 16.10: (XS_SQLiteInsertWithAutoinc.prg) ** You will find the source code in the repository

ExecuteReader(): Returns a DataReader as a forward-only/read-only cursor for getting the next row with every read.
ExecuteNonQuery(): Executes an SQL statement that does not return rows.
ExecuteScalar(): Returns a single value (type Object) from a Select query.

Tab 16.1: The important methods of the DbCommand class

Updating the database

The most important fact right at the start of this topic:

All updating is done by SQL statements or stored procedures, with no exceptions

This is in stark contrast to the way DBServer does updates where all it takes is not to forget the Close() method after one or more records have been appended, deleted, or changed.

The only choices .Net developers have are how the SQL statements are constructed and whether a predefined stored procedure (or any equivalent) is used instead of a SQL statement.

The second most important fact is (maybe) that fetching the data is completely independent of updating the data later. If the data has been fetched through a DataReader, there is no way back. If some of the data that was pulled through a DataReader into the UI should be written back to the database, this has to be done by sending an Update command for each row with the ExecuteNonQuery() method of a newly created DbCommand object.

If the data was filled into a DataTable through a DataAdapter, the DataAdapter has an Update() method. But before this method can be called, the needed SQL statements have to be provided, including all the needed parameters.

This is not as complicated as it may sound. The many examples in this chapter will hopefully prove me right. But before I show the different alternatives (two, actually) for updating the database the .Net way, a short discussion about locking is necessary.

Locking - optimistic and pessimistic

ADO.Net offers two types of locking:

  1. Optimistic
  2. Pessimistic

Optimistic locking is (for me) a kind of misnomer because there is no locking involved. It just means that data is queried from the database, kept in memory in a DataTable (or DataSet) for updating, inserting, and deleting rows, and is written back to the database as soon as possible to reduce the probability of a DBConcurrencyException. The general assumption is that in this scenario it is not very likely that another user has queried and changed the data in the meantime. Optimistic locking is the default, therefore there is absolutely nothing to do to enable it. Just follow the rules.

Pessimistic locking on the other hand means that a record is locked with the means of the DBMS when the data is queried by a SELECT. Because of the lock, the data cannot be changed by another user or process. It is a DBMS-specific technique that is generally not recommended and can be, to quote book author David Sceppa, "very dangerous".

A personal note: I have never used pessimistic locking in my 20+ years of programming with the .Net Framework.

How to detect conflicts?

This section will only give a very general description with a few examples. For an in-depth discussion please use the Microsoft documentation, which is very good, or read the two chapters of the very good book by David Sceppa. Although all code in both the Microsoft docs and the book is written in C#, all the described techniques can be used 1:1 with X#.

Let's assume a few rows had been queried from the database by using the Fill() method of a DataAdapter. They are now part of a DataTable (which has no connection to the database as you know by now). Let us further assume, that the queried table has no primary key. Now, two or three rows have been changed in the DataTable through data binding. The RowState of each of these rows changed from Unchanged to Modified.

Now what? The Update() method of the DataAdapter cannot be called because there is no primary key involved. This leaves us with no other choice than to put the UPDATE command together by hand.

Putting an SQL statement together by string concatenation seems to be easy at first glance, but what about the Where clause? Since there is no primary key, it has to use all the fields to exactly match the record in the database. But what about the values? For a comparison, we need the original values and not the modified values for each column (field).

Luckily, a DataRow offers a simple solution. For every column, it stores four different versions of the value of that column:

  • Current
  • Default
  • Original
  • Proposed

The constants are part of the System.Data.DataRowVersion enum.

Please refer to the Microsoft documentation for a detailed explanation of each value. In most scenarios, only the versions Original and Current matter, where Original is the value before and Current is the value after the value of a column has changed.

Since the existence of a specific version depends on the state of the row, the DataRow has a HasVersion() method to query if a version state exists.

The following example queries the proposed version of a specific column only if that version exists:

if row:HasVersion(DataRowVersion.Proposed)
  ? i"current:= {row[""price"", DataRowVersion.Current]} proposed:={row[""price"", DataRowVersion.Proposed]}"
EndIf

For an update operation, only the version states Current and Original are of interest. The current version of each field is used for the Set part of the Update statement whereas the original version of each value has to be part of the Where clause.

An Update statement auto-generated by a CommandBuilder for the Book table with its four fields Id, Author, Title, and PubYear will look like this:

UPDATE [main].[sqlite_default_schema].[Book] SET [authorId] = @param1, [title] = @param2, [pubYear] = @param3 WHERE (([id] = @param4) AND ((@param5 = 1 AND [authorId] IS NULL) OR ([authorId] = @param6)) AND ((@param7 = 1 AND [title] IS NULL) OR ([title] = @param8)) AND ((@param9 = 1 AND [pubYear] IS NULL) OR ([pubYear] = @param10)))

It might seem a little bit unusual that every field is included in the Where clause. But that is the only way to find out if the row to be updated has changed since it was retrieved by the Select statement of the DataAdapter. If that is the case, the Where clause is not fulfilled, the Update statement won't update anything, and the Update fails with a DBConcurrencyException.

A DBConcurrencyException is just an indication that an update was not possible, but not a solution. How the conflict is resolved is 100% up to the developer. One solution could be presenting the conflicting values to the user and asking for a choice: overwrite or don't overwrite. Another solution could be to overwrite the existing and changed values with the new values without any prompting. It's really up to you.

Updating data with a DataAdapter

Querying data with .Net and a DataAdapter is simple. The concept of a DataAdapter might seem a little unusual at first glance, but the programming is simple. Updating modified rows through a DataAdapter is more challenging, although it may look like the simplest thing in the world because the DataAdapter offers an Update() method. So, why don't we just call this method with the same DataAdapter that was used to query the data? Because the needed Delete, Insert, and Update commands don't exist yet. They have to be created first.

There are two options for "preparing" a DataAdapter for being able to update a database:

  1. By using a CommandBuilder
  2. By assigning the needed command objects to the DeleteCommand, InsertCommand, and UpdateCommand properties of the DataAdapter.

Although using a "builder" may sound like a convenient choice, Option 2 is the better one.

So, we have to set up the DataAdapter by assigning each of the needed update commands, which means we need SQL commands. But first, let's look at how a CommandBuilder would auto-create these commands.

Using a CommandBuilder class

First, there is no class with the name CommandBuilder. Since this class is always part of the DBMS-specific provider namespace, the name varies. For MS SQL Server its name is SqlCommandBuilder, for SQLite it's SqliteCommandBuilder. And what's its name for Oracle? Right, OracleCommandBuilder. The name changes, but the "API" (the names and purposes of its properties and methods) is always the same.


NOTE: There is also a DbCommandBuilder class in the namespace System.Data.Common. This is a provider-independent class and therefore very useful. I will not cover this class here because using the provider-independent classes requires a little additional preparation.


The purpose of this class is to automatically generate a DELETE, INSERT, and UPDATE SQL statement based on a SELECT statement that is provided through an existing DataAdapter instance.

Instantiating a CommandBuilder couldn't be easier. All it takes is an instance of a DataAdapter:

var cb := SqliteCommandBuilder{da}

Why a DataAdapter? Because an initialized DataAdapter always has a Select-Command from which the CommandBuilder can derive the table schema.

There is no overloaded constructor so there is no way to influence the way the CommandBuilder builds the commands. The result is three command objects with an attached SQL statement that can be queried with the methods GetDeleteCommand(), GetInsertCommand() or GetUpdateCommand(). Each of these methods returns a SqliteCommand object because a SqliteCommandBuilder was used.

The following example should be called "playing with a CommandBuilder" since it serves no real purpose. It instantiates a SqliteCommandBuilder and prints the text of the resulting Delete, Insert, and Update commands and the number of parameters each command has.

Example 16.11: (XS_SqliteCommandBuilder.prg)

// Example for the SqliteCommandBuilder in action
// use xsc .\16_SqliteCommandBuilder.prg /r:System.Data.Sqlite.dll

using System.Data.Sqlite

Function Start() As Void
    var cnStr := "Data Source=.\mondial.db"
    var cmdText := "Select * From Country"
    var da := SqliteDataAdapter{cmdText, cnStr}
    // it's not necessary to actually retrieve records from the database
    // let's build a CommandBuilder
    var cb := SqliteCommandBuilder{da}
    ? cb:GetDeleteCommand():CommandText
    var paraCount := cb:GetDeleteCommand():Parameters:Count
    ? i"{paraCount} Parameters"
    ? cb:GetInsertCommand():CommandText
    paraCount := cb:GetInsertCommand():Parameters:Count
    ? i"{paraCount} Parameters"
    ? cb:GetUpdateCommand():CommandText
    paraCount := cb:GetUpdateCommand():Parameters:Count
    ? i"{paraCount} Parameters"

It's interesting to note that no data has to be retrieved from the database in order to use the SqliteCommandBuilder for generating a Delete-, Insert-, and Update-Command.

There are three important restrictions for using a CommandBuilder in general (and I quote from the book by David Sceppa):

  1. The query returns data from a single table only
  2. The table has to have a primary key
  3. The primary key is included in the query result

If one of these requirements is not met, the CommandBuilder cannot be used. This does not mean though that the Update() method of the DataAdapter cannot be used. It means that we as the developers have to provide the needed SQL statements directly.

The real purpose of a CommandBuilder is to make the Update() method of a DataAdapter work for standard scenarios. It expects either a DataTable or an array of rows as an argument. Without a CommandBuilder, the UpdateCommand property of the DataAdapter has to get a command object by direct assignment. How this command object had been constructed doesn't matter. All it has to contain is either a valid SQL statement or the name of a Stored Procedure if the DBMS is MS SQL Server.

The next example finally updates a real database after a field in a row of a DataTable has been assigned a new value. This leads to a change in the RowState of that row, which is what the Update() method of the DataAdapter checks to decide which kind of update has to be performed for the row: Delete, Insert, or Update.

Since the sample should not compromise the Mondial database, the example uses the BookLib.db3 database instead (which is also part of the samples directory). This database consists only of a few tables: Author, Book, and BookCopy with a few records.

Example 16.12: (XS_SqliteDataTableUpdate.prg)

// Example for a complete Update() method call of a Sqlite Data Adapter
// use xsc ./16_SqliteDataTableUpdate.prg /r:System.Data.Sqlite

using System.Data
using System.Data.Sqlite
using System.Linq

Function Start() As Void
   var conStr := "Data Source=.\BookLib.db3"
   var sqlCmd := "Select * From BookCopy"
   var da := SqliteDataAdapter{sqlCmd, conStr}
   var ta := DataTable{"Books"}
   var retVal := da:Fill(ta)
   ?i"{retVal} rows retrieved"
   // Prepare the SqliteCommandBuilder
   var sqlCb := SqliteCommandBuilder{da}
   // print out the old prices
   ? "*** Current prices ***"
   For Local i := 0 UpTo ta:Rows:Count - 1
     ?i"current price={ta:Rows[i][""price""]}"
   Next
   // Increase the price of every book by 25%
   ForEach row As DataRow in ta:Rows
     var priceVal := Double.Parse(row["price"]:ToString())
     row["price"] := priceVal * 1.25
     ? i"original = {row[""price"", DataRowVersion.Original]} current:={row[""price"", DataRowVersion.Current]}"
  Next
   // print out the new prices
   ? "*** Updated prices ***"
   ForEach row As DataRow in ta:Rows
     ?i"current price={row[""price""]}"
   Next

   // get the number of modified rows before Update() resets them to Unchanged
   var modifiedCount := (from r as DataRow in ta:Rows Where r:RowState == DataRowState.Modified Select r):Count()

   // Write all changes back to the database.
   // Update() implicitly calls AcceptChanges(), so afterwards every row
   // is in RowState = Unchanged again
   var updateCount := da:Update(ta)

   ? i"{modifiedCount} rows modified, {updateCount} rows updated"

Updating a database without a DataAdapter

The best solution for me for implementing optimistic locking is to use a DataAdapter for filling a DataTable, but not to use its Update() method. Instead, independent SQL commands are executed for updates, inserts, and deletes for each modified, added, or deleted row.

Writing back data that is inside a DataTable (or a DataSet) means enumerating the rows of each DataTable and checking the RowState of each row to detect if its content has been changed, usually through data binding. The row check reveals if the row has been changed, deleted, or inserted. Depending on the outcome for each row, an Update-, Delete-, or Insert-Command is put together and executed through the ExecuteNonQuery() method of a command object.

The benefits are that the whole update process becomes simpler, easier to control, and independent from the restrictions of the DataAdapter.
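
As a sketch of that per-row loop (simplified; the table and field names follow the BookCopy example in this chapter, and only modified rows are handled):

// Sketch: write modified rows back without a DataAdapter.
// Added and deleted rows would be handled the same way with an
// Insert- or Delete-Command.
Begin Using var cn := SqliteConnection{cnStr}
   cn:Open()
   ForEach row As DataRow in ta:Rows
      If row:RowState == DataRowState.Modified
         var cmd := cn:CreateCommand()
         // the original value in the Where clause implements optimistic locking
         cmd:CommandText := "Update BookCopy Set price=@price Where Id=@id And price=@oldPrice"
         cmd:Parameters:AddWithValue("price", row["price", DataRowVersion.Current])
         cmd:Parameters:AddWithValue("id", row["Id"])
         cmd:Parameters:AddWithValue("oldPrice", row["price", DataRowVersion.Original])
         If cmd:ExecuteNonQuery() == 0
            ? i"Conflict: the row with Id={row[""Id""]} was changed in the meantime"
         EndIf
      EndIf
   Next
End Using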

The following example loads the content of the BookCopy table, changes the fields description and price of each row, and writes everything back to the database with optimistic locking.

Since only two fields of the BookCopy table will be affected, the Update statement is short. Deletes and Inserts are not considered.

Example 16.13: (XS_SqliteDataTableUpdateWithoutCmdBuilder.prg) ** You will find the source code in the repository

Does this sound complicated? Too complicated, especially considering how simple database access used to be with DBServer? Yes and no. Yes, because it took me quite a while to understand all this (many years ago). No, because after you have an understanding of the basic mechanism it's not complicated at all. You'll lose a lot of comfort in direct comparison with the way DBServer handles the database access with its "always online connection", but you also gain more freedom and flexibility and you are in control of all the database updates. No more black boxes.

Another important point to consider: the described pattern is more than 20 years old and it is still the only data access pattern in both the .Net Framework and .Net. So the assumption has to be that the vast majority of .Net developers are happy with it and are more than able to accomplish all their daily database tasks with it.

Working with a DataTable

This section is all about the DataTable class. Think about a DataTable as a flexible data structure with rows, typed columns, filters based on expressions for selecting rows, a merge functionality with other DataTables, and an XML import/export functionality. Most importantly: think of it as a data structure that has nothing to do with a database. Therefore, there is no database involved in all (except for one) examples in this section.

Creating a new DataTable

Creating a DataTable means instantiating the DataTable class. A name is optional.

var ta := DataTable{}

Before rows can be added, there have to be columns (based on the DataColumn class) with a name and a type:

ta:Columns:Add(DataColumn{"Id", typeof(String)})

When all columns are defined, the NewRow() method returns a new and empty row:

var row := ta:NewRow()

The next step is to assign values to each column:

row["id"] := 1

If all fields are set, the new row is added to the Rows collection:

ta:Rows:Add(row)

A DataTable can contain as many rows as needed. To save the data inside a DataTable there are two options:

  1. Save the DataTable as XML with the WriteXml() method.
  2. Create an Insert Into SQL command for each row and execute it with the ExecuteNonQuery() method of a newly created DbCommand object.

But storing a "free" DataTable in a database is usually not a requirement, because DataTables are usually filled from a database in the first place.

Example 16.14: (XS_DataTableCreate.prg) ** You will find the source code in the repository

Adding rows

How new rows are added to a DataTable was already mentioned in the last section. There are always three steps:

  1. Calling the NewRow() method
  2. Setting values for each column
  3. Calling the Add() method of the Rows collection property

It is also possible to add a row from another DataTable with the ImportRow() method.
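
A minimal sketch (taArchive is a hypothetical second DataTable with a compatible schema):

// Copy every row from ta into taArchive.
// Unlike the NewRow()/Add() combination, ImportRow() preserves the
// RowState and the original values of the imported row.
ForEach row As DataRow in ta:Rows
   taArchive:ImportRow(row)
Next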

Creating a new DataTable with constraints

A DataTable can have three kinds of constraints for each row:

  1. A primary key constraint
  2. A foreign key constraint
  3. A unique value constraint

A primary key can consist of several fields, therefore the PrimaryKey property of a DataTable is an array of DataColumn objects.

A primary key constraint is added like so:

ta:PrimaryKey := <DataColumn>{ta:Columns["Id"]}

Example 16.15: (XS_DataTableWithPrimaryKey.prg) ** You will find the source code in the repository

To check if a DataTable contains a primary key, testing the Length property is all it takes:

if ta:PrimaryKey:Length > 0

The other constraints are directly added to the Constraints property. A unique key constraint is added like so:

ta:Constraints:Add(UniqueConstraint{<DataColumn>{ta:Columns["KeyId"]}})

Example 16.16: (XS_DataTableWithContraints.prg) ** You will find the source code in the repository

If the value of the Id field of a new row violates any existing constraints, adding the row results in a ConstraintException.

Removing all rows

Removing all rows from a DataTable is easy thanks to the Clear() method.
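
A one-line sketch:

// Removes all rows; the column definitions and constraints stay intact
ta:Clear()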

Accessing rows and columns

The Rows collection of a DataTable is accessed through an index. Each column of a row can be accessed either by an index or by its name.

var row := ta:Rows[0]

?row[1]

row["Price"]

Example 16.17: (XS_DataTableRowsColumns.prg) ** You will find the source code in the repository

Editing columns

Editing the columns of a row means assigning the value directly like so:

row["Price"] := 1E6

But since the DataRow class offers BeginEdit() and EndEdit() methods, it's not always that simple. In a situation where validation logic has to be applied before the new value becomes official, this proposed value can be accessed through the Proposed version of that row. If the validation succeeds, the EndEdit() method ends the edit mode and the proposed value becomes the current value. Otherwise, the edit mode can be canceled through the CancelEdit() method.
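
A sketch of this validation pattern (the price limit is a made-up rule for illustration):

row:BeginEdit()
row["Price"] := 2E6
// until EndEdit() is called, the new value is only "proposed"
If Double.Parse(row["Price", DataRowVersion.Proposed]:ToString()) <= 1E7
   row:EndEdit()     // the proposed value becomes the current value
Else
   row:CancelEdit()  // the proposed value is discarded
EndIf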

AcceptChanges() and RejectChanges()

To close any open BeginEdit() calls and to turn every row into the "Unchanged" state again, the AcceptChanges() method of the DataTable is called. To roll back all changes made since the last call to AcceptChanges() or since the initialization of the DataTable, the RejectChanges() method is called.

Beware of the effect that the methods AcceptChanges() and RejectChanges() have on each row of a DataTable or the complete DataTable.

AcceptChanges(): The original value is set to the current value and the RowState changes to "Unchanged".
RejectChanges(): The current value is set back to the original value and the RowState changes to "Unchanged".

Tab 16.2: The meaning of AcceptChanges() and RejectChanges()

Example 16.18: (XS_DataRowAcceptRejectChanges.prg) ** You will find the source code in the repository

Sorting a DataTable with LINQ

A DataTable has no sort method but thanks to LINQ sorting a DataTable becomes easy:

var sortedForPrice := (from row as DataRow in ta:Rows OrderBy row["Price"] Select row):ToList()

Example 16.19: (XS_DataTableSort.prg) ** You will find the source code in the repository

Finding rows by filtering

There is no Find method, but a DataTable offers a Select() method that returns all rows that fulfill a given expression. An expression can contain field names and operators. Please check the official docs for a complete overview, or this very good summary:

https://www.csharp-examples.net/dataview-rowfilter/

The following example selects all rows with a Country field that contains the value 'Sweden':

var rows := ta:Select("Country='Sweden'")

In the following example, the Select() method call selects all rows with a Country field that contains the value 'Italy' and a Price field with a value > 1.3E6:

rows := ta:Select("Country='Italy' and Price > 1300000")

Example 16.20: (XS_DataTableSelect.prg) ** You will find the source code in the repository

Finding rows with LINQ

LINQ is always an option for any kind of query, and for selecting rows by criteria it might be both easier and more powerful to use the familiar X# syntax than the built-in expression syntax.

The next example also collects all rows from a DataTable with a Country field that has 'Sweden' as its content:

var rows := from row as DataRow in ta:rows where row["Country"]:ToString() == "Sweden" select row

Although it's still a kind of miracle to me that we can write LINQ queries in X# like this, the X# compiler might do a little better with type inference. The row variable should not need an explicit data type. But I think it's not the compiler's fault. One explanation for the resulting error message

Could not find an implementation of the query pattern for source type 'System.Data.DataRowCollection'. 'Where' not found. Consider explicitly specifying the type of the range variable 'row'.

is that System.Data.DataRowCollection only implements the non-generic IEnumerable interface, not the generic IEnumerable<DataRow> that LINQ requires.


TIP: A convenient and really fast way to find out what interfaces any .Net runtime type implements is using the PowerShell and typing in the following command:

[System.Data.DataRowCollection].GetInterfaces()

IsPublic IsSerial Name                                     BaseType
-------- -------- ----                                     --------
True     False    ICollection
True     False    IEnumerable

If the type is not found, the assembly has to be loaded first with the Add-Type command.

Let's try again with explicit casting:

var rows := from row in ta:rows:Cast<DataRow>() where row["Country"]:ToString() == "Sweden" select row

The power of LINQ combined with the infinite knowledge of StackOverflow;)

The following example combines two criteria in a LINQ query expression:

rows := from row in ta:Rows:Cast<DataRow>() where row["Country"]:ToString() == "Italy" && Double.Parse(row["Price"]:ToString()) > 1300000 select row

The explicit type conversion is necessary because row[Fieldname] always returns an object value.


TIP: To enjoy the benefits of LINQ in combination with a DataTable you have to use the DataTableExtensions class. In the .Net Framework runtime, this class is part of System.Data.DataSetExtensions.dll and not of System.Data.dll.


Example 16.21: (XS_DataTableLINQ.prg) ** You will find the source code in the repository

Loading and saving the content of a DataTable as XML

Last, but not least. The data content of a DataTable can be saved as XML (Extensible Markup Language) and an XML file can be loaded into a DataTable which is the simplest way to fill a DataTable with data.

The following example saves a DataTable as XML:

var xmlPath := Path.Combine(Environment.CurrentDirectory, "ExpensiveCars.xml")
ta:WriteXml(xmlPath)

Loading XML into a DataTable with the ReadXml() method seems too easy at first glance. But the following command does not work:

ta:ReadXml(xmlPath)

Using a stream does not make things any better:

Begin Using  var stream := FileStream{xmlPath, FileMode.Open}
   ta:ReadXml(stream)
End Using

In both cases the loading attempt results in an error message:

System.InvalidOperationException: DataTable does not support schema-deduction from Xml

The (simple and obvious) reason is that the ReadXml() method of the DataTable class expects an XML with a schema definition.

We have two options:

  1. Use the XmlWriteMode.WriteSchema argument with the WriteXml() method of the DataTable
  2. Use the ReadXml() method of the DataSet class and access the table through Tables[0]
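A minimal sketch of the first option could look like this (the table layout and file name are invented for illustration):

```
// Hedged sketch: write the schema together with the data so that
// DataTable:ReadXml() can load the file again (all names are invented)
using System
using System.Data
using System.IO

Function Start() As Void
   var ta := DataTable{"Cars"}
   ta:Columns:Add("Country", Typeof(String))
   ta:Rows:Add("Sweden")
   var xmlPath := Path.Combine(Environment.CurrentDirectory, "Cars.xml")
   // include the schema in the XML file
   ta:WriteXml(xmlPath, XmlWriteMode.WriteSchema)
   var ta2 := DataTable{}
   // works now, thanks to the inline schema
   ta2:ReadXml(xmlPath)
   ? i"{ta2.Rows.Count} row(s) loaded"
```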

Saving a DataTable in a database

A DataTable that was constructed and filled by code can be saved into an existing database by using a DataAdapter. It does not matter if the data originally came from that database or not. It could even have been queried from a completely different kind of data store. There is only one condition: It has to have a primary key. And of course, the database has to have an appropriate table for the Insert command.

For the next example, there is already a DataTable with a few rows. The first step is to create a DataAdapter with a connection string for an existing database (SQLite for this example):

// Create a DataAdapter
var cnStr := "Data Source=.\CarsDb.db3"
var sqlText := "Select * From Cars"
var adapter := SQLiteDataAdapter{sqlText,cnStr}

The next step is to create a CommandBuilder:

// Create a CommandBuilder
var cb := SQLiteCommandBuilder{adapter}

Again, this will only work if the DataTable has a primary key. Otherwise, an Insert command would have to be provided to the DataAdapter.

Now, all it takes is to call the Update() method of the DataAdapter to save the DataTable into the database:

var result := adapter:Update(ta)
? i"{result} rows stored in the database"

Fig 16.7: A DataTable has been stored in an SQLite database

Example 16.22: (XS_DataTableInsertDatabase.prg) ** You will find the source code in the repository

Honorable mention - the DataSet

One reason the DataSet class has not been used in this chapter is that it's simply not needed. The idea (in 2002 and before, when the .Net Framework was designed) was that a DataSet should be a kind of database engine that is held completely in memory. To make a long story short, there are only three main differences between a DataSet and a DataTable:

  1. A DataSet can hold several DataTables
  2. A DataSet can use master-child-relations between two DataTables
  3. A DataSet is more flexible with saving and loading XML

Since none of these features matter for the examples in this chapter, there was no need to use a DataSet at all.
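Just to illustrate what these three features look like, here is a small hedged sketch (all table and column names are invented):

```
// Hedged sketch of the DataSet extras: several tables, a relation,
// and XML saving including the schema (all names are invented)
using System.Data

Function Start() As Void
   var ds := DataSet{"Library"}
   // 1. a DataSet can hold several DataTables
   var books := ds:Tables:Add("Book")
   books:Columns:Add("Id", Typeof(Int32))
   var copies := ds:Tables:Add("BookCopy")
   copies:Columns:Add("BookId", Typeof(Int32))
   // 2. a master-child relation between two DataTables
   ds:Relations:Add("Book_BookCopy", books:Columns["Id"], copies:Columns["BookId"])
   // 3. flexible XML handling, including the schema
   ds:WriteXml("Library.xml", XmlWriteMode.WriteSchema)
```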

The most popular DBMS in the world - Microsoft Access

The title for this topic is a little joke, of course. If there is a most popular DBMS in the world, it's probably not Microsoft Access. But if some mighty artificial intelligence were able to count the number of all mdb files on all the computers in this world (especially of those that belong to a government agency), the winner might well be the famous Jet engine as the "database part" of Microsoft Access (which is a really good piece of software).

All jokes aside, accessing an mdb database with X# requires little more than using a few classes in the namespace System.Data.OleDb and an appropriate connection string like

Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Test1234.mdb;Jet OLEDB:Engine Type=5

ODBC would also be an option but I won't get into that. The biggest obstacle might be finding the 64-bit version of Microsoft.Jet.OLEDB.4.0. Windows contains only the 32-bit version. If you don't have the 64-bit version installed, the X# program has to be compiled for x86.

The following example uses a small database SampleData.mdb that is part of the samples directory. It contains only a single table with a single row.

Example 16.23: (XS_AccessRead.prg)

// example for reading a MS Access mdb database file
// if only the 32 bit ole db provider is installed use
// compile xsc .\XS_AccessRead.prg -platform:x86 and ignore warning

using System.Data
using System.Data.OleDb

Function Start() As Void
   var mdbPath := "SampleData.mdb"
   var cnStr := i"Provider=Microsoft.Jet.OLEDB.4.0;Data Source={mdbPath};Jet OLEDB:Engine Type=5"
   try
      var sql := "Select * From Messwerte"
      var da := OleDbDataAdapter{sql, cnStr}
      var ta := DataTable{}
   ? da:Fill(ta)
      ? i"{ta.Rows.Count} rows in the table"
   catch ex As OleDbException
      ? i"Error: {ex:Message}"
   end try

Creating an mdb file

It's also possible to start with a freshly created database. This is possible thanks to several COM objects like ADOX.Catalog and ADOX.Table. They are also part of the Microsoft.Jet.OLEDB.4.0 provider. So there is no need to install anything. The only drawback again is the fact that the application has to be compiled for x86.

The following example creates an mdb database with a single table but no data. Instead of using ADOX.Table, the table could also be created with a CREATE TABLE statement. The example uses late binding to access the COM objects.

Although I assume that only a very few readers will ever do this, it's another nice example of transferring an old technique from the '90s, used with many Visual Basic and VBA programs, to the 21st century and X#.

Example 16.24: (XS_AccessCreateDbWithTable.prg) ** You will find the source code in the repository


NOTE: It should be clear without much further explanation that early binding with COM objects is always the better option. In this case, a reference to the Microsoft ADO Ext. 2.8 for DDL and Security library would be needed in Visual Studio. The reason for using late binding in this example is that it is simpler to compile.


The provider independent classes in the Namespace System.Data.Common

The first thing most developers starting with database access in X# will do is use the provider-specific classes. But as soon as more than one DBMS type comes into consideration, the provider-independent classes in the namespace System.Data.Common become interesting.

The classes in the namespace System.Data.Common offer a few advantages:

  • They are independent of a specific provider
  • They work with any provider that has been registered with the .Net runtime

The namespace provides classes like

  • DbProviderFactory
  • DbConnection
  • DbCommand
  • DbDataReader
  • DbDataAdapter
  • DbParameter
  • DbException etc.

There is even a class with the promising name DbDataRecord, but it is not quite what its name might suggest.

It's important to understand that the classes just provide provider-independent functionality. They are NOT taking care of DBMS-specific differences like differences between two SQL dialects or placeholders for parameters in an SQL statement.

The classes offer the least common denominator. But it is also possible to make use of provider-specific details.
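A tiny hedged sketch of that idea, assuming the System.Data.SQLite assembly is referenced:

```
// Hedged sketch: the provider-independent base type on the left,
// a provider-specific object on the right (assumes System.Data.SQLite)
using System.Data.Common
using System.Data.SQLite

Function Start() As Void
   Local con := SQLiteConnection{} As DbConnection
   // general code works with the DbConnection base type ...
   con:ConnectionString := "Data Source=.\Booklib.db3"
   // ... and a cast back gives access to provider-specific details
   var sqliteCon := (SQLiteConnection)con
   ? sqliteCon:GetType():FullName
```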

Understanding the factory pattern

The most important class is DbProviderFactory. Its sole purpose is to "produce" other objects like:

  • a DbCommand object through the CreateCommand() method
  • a DbDataAdapter object through the CreateDataAdapter() method
  • a DbParameter object through the CreateParameter() method

There are a few other objects that a DbProviderFactory can produce but they are not so important so they won't be described here.

Opening a connection in a provider-independent way

For opening a connection the first step is to get a DbProviderFactory object. Two ingredients are needed for this:

  1. The connection string (as usual)
  2. The invariant name of the data provider (like System.Data.SqlClient for MS SQL Server).

NOTE: Whereas the invariant names of the standard providers are obvious, the invariant name of a provider for a not-so-common DBMS like PostgreSQL has to be looked up, either in the documentation of that provider or in the content of the machine.config file after the provider has been registered.


But don't worry, nobody has to search for the machine.config on her or his computer. With a few lines of code, the name and the invariant name of each registered provider can be queried.

Example 16.25: (XS_ListDataProviders.prg)

List all registered data providers

// List data providers

Using System.Data
Using System.Data.Common

Function Start() as Void
   var taProv := DbProviderFactories.GetFactoryClasses()
   ForEach row as DataRow in taProv:rows
    ? i"Name={row[""name""]} Invariant Name={row[""invariantname""]}"
   Next


Alt A list of the available .Net database providers
Fig 16.8: A list of the available .Net database providers

Opening a connection to an MS SQL Server database

The following example opens a connection to an MS SQL Server database (you will have to edit the connection string first, because the database used here does not exist on your computer). The connection is automatically closed thanks to the Begin Using/End Using statement.

Example 16.27: (XS_DBConnectionCreate_MSSQL.prg)

// Opening a provider independent connection to MS SQL Server database

using System.Data
using System.Data.Common

Function Start() as Void
   Local providerName := "System.Data.SqlClient" As String
   Local conString := "Data Source=.\SQLExpress19;Initial Catalog=ef1;Integrated Security=SSPI" As String
   Local factory := DbProviderFactories.GetFactory(providerName) As DbProviderFactory
   Begin Using var con := factory:CreateConnection()
      con:ConnectionString := conString
      con:Open()
      ? "*** Connection opened ***"
   End Using
   ? "*** Connection was closed ***"

Opening a connection to a SQLite database

Opening a connection to a SQLite database is no different from opening a connection to any other DBMS, except that the .Net provider for SQLite usually is not known, because it has not been registered yet and is therefore not part of the global Machine.config file. Although it would be possible to register it on the fly, a better alternative is to use the config file that can accompany any exe file.

The following commands won't work without that config file:

var providerName := "System.Data.SQLite"
var conString := "Data Source=.\Booklib.db3" 
Local factory := DbProviderFactories.GetFactory(providerName) As DbProviderFactory

The error message says it all: The required .Net data provider is not known.

Create a text file with the name XS_DbConnectionCreate_SQLiteConfig.exe.config (assuming XS_DbConnectionCreate_SQLiteConfig.exe is the name of the exe file) with the following content:

<configuration>
  <system.data>
    <DbProviderFactories>
      <remove invariant="System.Data.SQLite" />
      <add
        name="SQLite Data Provider"
        invariant="System.Data.SQLite"
        description=".Net Framework Data Provider for SQLite"
        type="System.Data.SQLite.SQLiteFactory, System.Data.SQLite"
        />
    </DbProviderFactories>
  </system.data>
</configuration>

Example 16.28: (XS_DbConnectionCreate_SQLiteConfig.prg) ** You will find the source code in the repository

If that doesn't work for some reason, there is another example that shows how to register the SQLite provider without a config file. This technique will be used for the rest of the examples in this section so there is no need to create a config file for each example.

Example 16.29: (XS_DbConnectionCreate_SQLite.prg) ** You will find the source code in the repository

Getting a provider-independent DataReader for SQLite

As soon as a DbConnection object is available, a DbCommand object can be derived from it through the CreateCommand() method. The next step would be to call the ExecuteReader() method as usual. That means that as soon as the factory "produces" a connection object, everything works the same as with the provider-specific classes.

Example 16.30: (XS_DbCreateDataReaderSQlite.prg) ** You will find the source code in the repository
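The principle can be sketched in a few lines (this assumes the SQLite provider has already been registered and uses the database file from the earlier examples; the column name is an assumption):

```
// Hedged sketch: a provider-independent DataReader
using System.Data
using System.Data.Common

Function Start() As Void
   var factory := DbProviderFactories.GetFactory("System.Data.SQLite")
   Begin Using var con := factory:CreateConnection()
      con:ConnectionString := "Data Source=.\Booklib.db3"
      con:Open()
      var cmd := con:CreateCommand()
      cmd:CommandText := "Select * From Book"
      Begin Using var dr := cmd:ExecuteReader()
         Do While dr:Read()
            ? dr["Title"]   // column name is an assumption
         Enddo
      End Using
   End Using
```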

Getting a DataTable in a provider-independent way

For getting a regular DataTable, an instance of the DbDataAdapter class is needed. This is another job for the DbProviderFactory class, which provides a CreateDataAdapter() method. Unlike the provider-specific DataAdapter classes, the DbDataAdapter needs a bit more "caretaking".

var providerName := "System.Data.SQLite"
var conString := "Data Source=.\Booklib.db3"
Local factory := DbProviderFactories.GetFactory(providerName) As DbProviderFactory
var ta := DataTable{"books"}
var cn := factory:CreateConnection()
cn:ConnectionString := conString
var cmd := cn:CreateCommand()
cmd:CommandText := "Select * From Book"
var da := factory:CreateDataAdapter()
da:SelectCommand := cmd
da:Fill(ta)

Example 16.31: (XS_DbAdapterSQLite.prg) ** You will find the source code in the repository

Using provider-independent query parameters

Of course, the use of parameters for queries and other SQL statements is also possible with the provider-independent classes in the namespace System.Data.Common.

The only minor drawback is that the DbParameterCollection class has no AddWithValue() method, so adding a parameter and its value requires setting two properties.

Begin Using var con := factory:CreateConnection()
    con:ConnectionString := conString
    con:Open()
    var cmd := con:CreateCommand()
    cmd:CommandText := "Select * From BookCopy Where Price > @Price"
    var paraPrice := factory:CreateParameter()
    paraPrice:ParameterName := "@Price"
    paraPrice:Value := 20
    cmd:Parameters:Add(paraPrice)
    var dr := cmd:ExecuteReader()
End Using

Example 16.32: (XS_DbAdapterParametersSQLite.prg) ** You will find the source code in the repository

Data binding with WinForms

Data binding is not the concern of the data provider classes but of the "UI package", which means WinForms and WPF. In the simplest case, a DataGridView offers a DataSource property that is assigned a DataTable, so that the rows of the DataTable are not only displayed in the DataGridView (or any other UI control) but all the changes made in the DataGridView are immediately written back to the DataTable (but not to the database).

Local ta := DAL.GetData() As DataTable
DataGrid1:DataSource := ta

If a current record is needed, that is where the BindingSource class comes into play. It's really simple to use: it just needs a data source (a DataTable, for example) and is then assigned to the control's DataSource property instead of the DataTable itself. A BindingSource offers several advantages. One of them is a cursor model with methods like MoveNext(), MoveFirst(), and MovePrevious() and a Current property that references the currently active "record". But again, this is only about WinForms data binding.

In the first step a BindingSource is instantiated and gets an already filled DataTable as its data source:

Var sqlText := "Select * From Book"
BsSource := BindingSource{}
BsSource:DataSource := DAL.InvokeSelect(sqlText)
DataGridView1:DataSource := BsSource

The next step is to use the methods of the BindingSource and its Current property for typical data binding activities:

Method MoveCursor(Sender As Object, e As EventArgs) As Void
    Switch ((Button)Sender):Name
      Case "bnMovePrev"
        BsSource:MovePrevious()
      Case "bnMoveFirst"
        BsSource:MoveFirst()
      Case "bnMoveLast"
        BsSource:MoveLast()
      Case "bnMoveNext"
       BsSource:MoveNext()                    
    End Switch

XS_WinFormsDatabinding is a small WinForms project that illustrates how to move a cursor through the rows of a DataTable, filled by a query against a SQLite database, by means of a BindingSource.

Example 16.33: (XS_WinFormsDatabinding) ** You will find the source code in the repository

Fig 16.9: The DataGridView gets its data from a BindingSource that encapsulates a DataTable with its rows

Visual Studio support for data binding

Visual Studio has offered excellent support for data binding in WinForms applications since Visual Studio 2005. Starting with a data source that encapsulates any kind of database, the data binding can be completely configured by clicking through several wizards, without any need for coding. There is a BindingNavigator component that looks like the typical "data navigation control" but is an ordinary ToolStrip control containing predefined buttons with icons for the standard database actions, where every action is based on a current record.

There are two reasons that this promising alternative is only mentioned briefly in this chapter:

  1. The required data sources are only available for C# and Visual Basic projects
  2. Everything can be accomplished through code, and therefore in X# too, with little extra effort

One note for readers who are not familiar with "components" that are part of the Visual Studio toolbox. When you drag items like a BindingSource, a BindingNavigator, or a DataSet onto a form, they will not be placed on the surface of the form because they are not controls. Instead, they will be placed as components below the form designer. That only means that the code for instantiating the class is already part of the designer.prg file and the properties of the instance can be set through the property window.

Fig 16.10: The WinForms data binding classes are offered through the toolbox as "components"

Visual Studio support for databases

Visual Studio has excellent "support" for databases and their content. This is completely independent of any programming language or application type. It's all about connecting to a database and being able to browse the tables, their fields and, of course, the content of a table. It all happens inside the Server Explorer window. The first step is to connect to a database. The second step is to browse its tables and views.

This will also work for a SQLite database when an appropriate extension has been installed. There is no extension from Microsoft, but there are several options. When only basic functionality is needed (which is usually the case), the SQLite extension from Brice Lambson and Erik Ejlskov Jensen (who is the author of several other extensions) works very well. You will find the extension in the extension catalog (Extensions -> Manage Extensions).

The project website is https://github.com/bricelam/VS.Data.Sqlite.

After connecting to a SQLite database file, it's possible to browse the tables and retrieve their data. Creating new tables or modifying an existing table is not possible. I would use a SQLite admin tool for this kind of task.

Fig 16.11: Connecting to an SQLite database in the Server Explorer window

The "grand finale"

Since the main purpose of this little book is to explain X# programming with examples, probably the most important chapter of this book will conclude with a "complete" WinForms application that allows updating, inserting, and deleting rows of database data like it was done in the '90s.

The solution TheDataBindingExample_16.sln consists of three separate projects:

  1. TheDataBindingExample_16.xsproj
  2. DALDB.xsproj
  3. TestDALDB.xsproj

The idea is to separate the Data Access Layer (DAL) into a separate class library. The third project is a Unit Testing project (more on this topic in another chapter of this book).

The database

The DBMS is again SQLite. The DAL uses the provider-independent classes from System.Data.Common, so it's a bit of a challenge because both the configuration of the database and the registration of the data provider are done through config files.

An important aspect is the fact that there is no App.config file for a class library. The DALDB project therefore uses a more flexible approach, where the config file can be any file. For this project, the config file is Db.config. It does not register the data provider; it only contains the provider name and the connection string.

<configuration>
  <connectionStrings>
    <clear/>
    <add name="bookLib"
      providerName = "System.Data.SQLite"
      connectionString = "Data Source=.\BookLib.db3"
    />
  </connectionStrings>
</configuration>
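Reading such a custom config file could be sketched like this (the DAL's real implementation is in the repository; this is just my hedged outline using the standard configuration classes):

```
// Hedged sketch: read provider name and connection string
// from an arbitrary config file (here: Db.config)
using System.Configuration

Function Start() As Void
   var map := ExeConfigurationFileMap{}
   map:ExeConfigFilename := "Db.config"
   var config := ConfigurationManager.OpenMappedExeConfiguration(map, ConfigurationUserLevel.None)
   var cs := config:ConnectionStrings:ConnectionStrings["bookLib"]
   ? cs:ProviderName
   ? cs:ConnectionString
```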

The UI

The UI consists of a single form with a BindingNavigator control and two DataGridView controls. The idea behind the BindingNavigator control, which has been part of the WinForms control set since .Net 2.0 and Visual Studio 2005, is to make creating a database-centric UI as simple as possible - in an "ideal case" (when using either C# or Visual Basic) without any need for coding. Although it is more or less a preconfigured ToolStrip control, it has some benefits, like a BindingSource property for assigning a BindingSource object and a few buttons (based on the ToolStripButton class) for navigating through the rows or items of the data source.

The data source is the result of a simple Select statement:

BsSourceBooks := BindingSource{}
var sqlText := "Select * From Book"
BsSourceBooks:DataSource := DBHelper.InvokeSelect(sqlText)
DataGridView1:DataSource := BsSourceBooks
bindingNavigator1:BindingSource := BsSourceBooks

Since there is a 1:n relation between the book and the bookCopy table, selecting a book row should display all rows from the BookCopy table with the bookId of the currently selected row in the second DataGridView control:

BsSourceBookCopies := BindingSource{}
var bookId := ((DataRowView)BsSourceBooks:Current)["Id"]
Var sqlText := i"Select * From BookCopy Where BookId={bookId}"
BsSourceBookCopies:DataSource := DBHelper.InvokeSelect(sqlText)
DatagridView2:DataSource := BsSourceBookCopies

Updating the database

This is the part where the most "brain power" was needed. Although the BookCopy table has a primary key, and it would therefore be sufficient to use an SQLiteCommandBuilder to create the three command objects necessary for updating the table, every update is done "the hard way" to maximize the learning effect.

Inserting rows

Inserting a row is done with an INSERT statement when the RowState is Added. The first part of the statement is constructed from the column names:

sqlText := i"Insert Into {Table.TableName} ("
ForEach col As DataColumn In Table:Columns
  // Don't include the primary key and null columns
  If !row[col] Is DBNull && col:ColumnName <> Table:PrimaryKey[1]:ColumnName
     sqlText += i"{col.ColumnName}, "
  Endif
Next

After removing the final comma, the values are added:

sqlText += i") Values ("
Foreach col As DataColumn In Table:Columns
  // Don't include the primary key and null columns
  If !row[col] Is DBNull && col:ColumnName <> Table:PrimaryKey[1]:ColumnName
     If col:DataType == Typeof(String) || col:DataType == Typeof(Datetime)
        sqlText += i"'{row[col.ColumnName]}', "
     Else
        sqlText += i"{row[col.ColumnName]}, "
     Endif
  Endif
Next

The primary key should not be part of the INSERT because it's an auto-increment field. A distinction between the data types is necessary because some types have to be enclosed in apostrophes, others not.
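As a side note: the quoting logic (and the risk of SQL injection) could be avoided entirely with DbParameter objects. A hedged fragment, assuming the factory object and the row variable from the surrounding code, and an invented column name:

```
// Hedged alternative: build the Insert with DbParameter objects
var cmd := factory:CreateCommand()
cmd:CommandText := "Insert Into BookCopy (Price) Values (@Price)"
var para := factory:CreateParameter()
para:ParameterName := "@Price"
para:Value := row["Price"]   // no quoting needed, regardless of the type
cmd:Parameters:Add(para)
```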

Deleting rows

Deleting a row is simple because there is a primary key.

sqlText := i"Delete From {Table.TableName} Where {Table.PrimaryKey[1]} = {row[Table.PrimaryKey[1]]}"

Updating rows

Updating a row is done with an UPDATE statement when the RowState is Modified. To include only the fields that have been changed, the original value of each column is compared with its current value:

sqlText := i"Update {Table.TableName} Set "
ForEach col As DataColumn In Table:Columns
   If col:DataType != Typeof(DBNull)
      If col:ColumnName <> Table:PrimaryKey[1]:ColumnName
         If row[col, DataRowVersion.Original] != row[col, DataRowVersion.Current]
            If col:DataType == Typeof(String) || col:DataType == Typeof(Datetime)
                sqlText += i"{col.ColumnName} = '{row[col.ColumnName]}', "
            Else
                sqlText += i"{col.ColumnName} = {row[col.ColumnName]}, "
            Endif
          Endif
       Endif
   Endif
Next

After removing the last comma, the Where clause is constructed column by column, except for the primary key:

sqlText := sqlText:Substring(0, sqlText:Length - 2)
sqlText += i" Where {Table.PrimaryKey[1]} = {row[Table.PrimaryKey[1]]} "
Foreach col As DataColumn In Table:Columns
   If !row[col] Is DBNull && col:ColumnName <> Table:PrimaryKey[1]:ColumnName
      If col:DataType == Typeof(String) || col:DataType == Typeof(Datetime)
          sqlText += i"And {col.ColumnName}='{row[col.ColumnName,DataRowVersion.Original]}' "
      Else
         sqlText += i"And {col.ColumnName}={row[col.ColumnName, DataRowVersion.Original]} "
        Endif
   Endif
Next

It is important to understand why a comparison between the current value of a column and its original value is necessary. The idea is to be able to detect whether the row has been changed (by another user) since it was retrieved. This means that the current value of each column has to be compared with its original value, i.e. the value that was retrieved through the Select command. Since both values are stored in the row, the comparison is easy.

If the value has been changed, the Where clause is not fulfilled and the update will fail. There will be no exception though but a return value of 0. Now the real work begins. How should we handle this situation? As already stated, the recommended option is to let the user decide whether the new values should override the changed values or not.

My recommendation is to keep it simple by updating the database as soon as possible to reduce the probability of a change made by another user to the same row. And if it happens, overwrite the values. The other user will likely realize that his or her update didn't succeed. But of course, this option is far from perfect either.

Registering the SQLite data provider

The App.config file of the main project registers the data provider.

<configuration>
  <system.data>
    <DbProviderFactories>
      <remove invariant="System.Data.SQLite" />
      <add
        name="SQLite Data Provider"
        invariant="System.Data.SQLite"
        description=".Net Framework Data Provider for SQLite"
        type="System.Data.SQLite.SQLiteFactory, System.Data.SQLite, Version=1.0.117.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139"
        />
    </DbProviderFactories>
  </system.data>
</configuration>

The deployment directory

Whatever references have been set and whatever project settings are used, in the end all required files must be in the same directory. This is especially important for the file System.Data.SQLite.dll. Not only does it have to be present, it also has to be the right version and have the right architecture setting ("x64" in this case). Otherwise, a runtime exception occurs.

Fig 16.12: The content of the output directory

Switching to an Oracle database

There are only two ingredients required for switching the sample project from SQLite to Oracle (or any other DBMS):

  1. The appropriate provider name ("Oracle.ManagedDataAccess.Client" in this case) and connection string in the config file
  2. A reference to the Oracle.ManagedDataAccess.dll assembly (which can be downloaded for free from Oracle)

The same applies to any other DBMS as well.

Since Oracle is such an important topic, I have provided a small example as well. Please be aware that the example cannot work if you don't provide the following information:

  1. the name of the database user
  2. a password for the database user
  3. the name of the host (hostname or IP address)
  4. the name of the Oracle service

This information has to be put into the config file (XS_OracleDatabaseAccess.exe.config). Although the config file would usually contain the complete connection string with all the listed ingredients, storing the parts as individual settings makes them a little easier to edit.

And the Oracle.ManagedDataAccess.dll file with the managed provider has to be in the same directory as the exe file. Please note that this has to be the .Net Framework version (there is also a version for the newer .Net runtimes like .Net 7).

Accessing an Oracle database

The topic of accessing an Oracle database has "haunted" me for most of my developer life. Accessing an MS SQL Server database was always easy, straightforward, and a nice experience; accessing an Oracle database was a completely different story. It started with the need to configure some obscure listener text files and ended with the fact that there is no concept of a database in the Oracle world. But that's my personal experience. Besides that, Oracle offers a powerful DBMS that is probably used by most large corporations and government agencies worldwide. And accessing an Oracle "database" has become easy too. No more listeners; all it needs is a connection string and a single assembly that you can download from the Oracle developer site for free. And as Microsoft does with SQL Server Express, Oracle also offers a free version of its DBMS under the name Oracle Database Express Edition. And for administration, I like SQL Developer even a little more than SQL Server Management Studio.

Example 16.34: (XS_OracleDatabaseAccess.prg)

// A simple example for accessing an Oracle "database"
// Compile with xsc XS_OracleDatabaseAccess.prg /r:Oracle.ManagedDataAccess.dll /r:System.Configuration.dll

Using System.Configuration
using System.Data
using Oracle.ManagedDataAccess.Client

Function Start() As Void
   var DbName := ConfigurationManager.AppSettings["DbName"]
   var Pw := ConfigurationManager.AppSettings["Pw"]
   var Hostname := ConfigurationManager.AppSettings["Hostname"]
   var Servicename := ConfigurationManager.AppSettings["Servicename"]
   var conStr := i"Data Source=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST={Hostname})(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME={Servicename})));User Id={DbName};Password={Pw}"
   var sqlCmd := "Select * From Adress"
   var da := OracleDataAdapter{sqlCmd, conStr}
   var ta := DataTable{"Adress"}
   var retVal := da:Fill(ta)
   ?i"{retVal} rows retrieved"
   ForEach row As DataRow in ta:Rows
    ?i"AdrNr: {row[""adrNr""]} Nachname: {row[""nachname""]}"
   Next