Avalonia: Master – Detail Relationships with Tripous.Avalon

Master – Detail Relationships & Filtering

GitHub repository

Managing related data (Master-Detail) is one of the most demanding aspects of building business applications.

Tripous.Avalon simplifies this process, allowing developers to define relationships between BindingSources with a single line of code, automatically handling UI synchronization and data integrity.

Posted in Avalonia, C#, Desktop, Dev

Beyond XAML: Bridging the Gap in Avalonia UI with Dynamic Runtime Data-Binding

Tripous.Avalon

Dynamic Data-Binding & Management for Avalonia UI

GitHub repository

Tripous.Avalon is a lightweight, high-performance library designed to simplify data management and UI synchronization in Avalonia UI applications. It provides a robust abstraction layer between your UI and your data sources, whether they are DataTables or Generic Lists.

The Philosophy: Runtime Dynamics over Static XAML

The majority of Avalonia and WPF documentation focuses heavily on Design-time binding via XAML. While this works for simple, static forms, it often falls short in complex, professional-grade applications.

In the real world, enterprise applications require Dynamic Binding:

  • UI layouts that are generated or modified at runtime.
  • Data sources and schemas that aren’t known until the application is executing.
  • The need to bind, unbind, or re-bind controls programmatically without bloating the XAML with complex converters.

Tripous.Avalon is built specifically for this. It moves the binding logic into a centralized, programmable DataSource object, allowing for a flexible and maintainable codebase where the UI responds dynamically to the data structure.

Filling the Gap: Why Tripous.Avalon?

Surprisingly, modern UI frameworks often lack a native, high-level mechanism to handle common data entry requirements out-of-the-box. Developers are usually left to manually implement:

  1. Currency Management: Tracking the “Current” record across multiple synchronized controls (TextBoxes, Grids, etc.).
  2. Standardized Notifications: A unified way to listen for OnAdding, OnChanging, or OnDeleting events across any data type (List or Table).
  3. Lookup Logic: The complex task of mapping IDs to Display Names in ComboBoxes and ListBoxes at runtime.

Tripous.Avalon fills this gap by providing a ready-to-use engine that treats a List or a DataTable with the same first-class respect, offering a rich event lifecycle and a simplified API that empowers the developer to build data-heavy applications faster.

Key Features

  • Unified IDataLink: Use the same C# code to manage System.Data.DataTable and IList.
  • One-Line Binding: Bind any Avalonia control (TextBox, CheckBox, ComboBox, DatePicker, DataGrid, etc.) with a single method call.
  • Advanced Lookup Support: Effortless binding for lookup controls with automatic synchronization between ID values and display text.
  • Lifecycle Events: Hook into the data flow with cancelable events for validation and business logic.
  • UX Enhancements: Built-in keyboard navigation support (e.g., Enter as Tab, F4 for Dropdowns).
Posted in Avalonia, C#, Dev

Serialize and deserialize JSON using JsonSerializer

This text explores the use of the .NET JsonSerializer class for serializing and deserializing .NET classes to and from JSON.

.NET types related to serialization are found in the System.Text.Json and System.Text.Json.Serialization namespaces.

There is of course the excellent Newtonsoft.Json library, but JsonSerializer is worth using since it is the native solution provided by .NET and there is no need to install any NuGet package in order to use it.

JsonSerializer can be used with .NET Core 3.0 and later and with .NET Standard 2.0.

The full text and demo project can be found on GitHub.

Basics

The following code entities are used in this text.

public enum Status
{
    None,
    Pending,
    InProgress,
    AllCompleted
}

public class Part
{
    public string Code { get; set; }
    public decimal Amount { get; set; } 
    public bool IsCompleted { get; set; }
}

JsonSerializer is a static class. No need to create an instance.

// serialization
Part P = new(); 
string JsonText =  JsonSerializer.Serialize(P);

// de-serialization
Part P2 = JsonSerializer.Deserialize<Part>(JsonText);

// or
P2 = JsonSerializer.Deserialize(JsonText, typeof(Part)) as Part;

JsonSerializerOptions

The Serialize() and Deserialize() methods accept a JsonSerializerOptions parameter.

JsonSerializerOptions JsonOptions = new();

// serialization
Part P = new(); 
string JsonText = JsonSerializer.Serialize(P, JsonOptions);

// de-serialization
Part P2 = JsonSerializer.Deserialize<Part>(JsonText, JsonOptions);

// or
P2 = JsonSerializer.Deserialize(JsonText, typeof(Part), JsonOptions) as Part;
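
As a quick illustration, here are some commonly used options. This is a sketch, not an exhaustive tour: the option values chosen, and the extra Status property on Part, are assumptions made for this example only.

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

public enum Status
{
    None,
    Pending,
    InProgress,
    AllCompleted
}

public class Part
{
    public string Code { get; set; }
    public decimal Amount { get; set; }
    public bool IsCompleted { get; set; }
    public Status Status { get; set; }   // added for this example
}

public static class Demo
{
    public static void Main()
    {
        JsonSerializerOptions JsonOptions = new()
        {
            WriteIndented = true,               // pretty-print the output
            PropertyNameCaseInsensitive = true  // tolerant matching when de-serializing
        };
        // serialize enum values as their names instead of their numeric values
        JsonOptions.Converters.Add(new JsonStringEnumConverter());

        Part P = new() { Code = "P-001", Amount = 12.5m, Status = Status.Pending };
        string JsonText = JsonSerializer.Serialize(P, JsonOptions);
        Console.WriteLine(JsonText);

        Part P2 = JsonSerializer.Deserialize<Part>(JsonText, JsonOptions);
        Console.WriteLine(P2.Code);
    }
}
```

Without the JsonStringEnumConverter the Status property would be written as a number, e.g. 1, instead of "Pending".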

Posted in C#

Database Schema information for various RDBMS

This GitHub repository contains a number of SQL SELECT statements that return Database Schema information about tables, columns, views, triggers, stored procedures, constraints, etc. for the following RDBMS

  • FirebirdSql
  • MsSql
  • MySql
  • PostgreSql
  • Sqlite
  • Oracle

Under the root folder there is a sub-folder for each RDBMS containing *.sql files with the SELECT statements that return Database Schema information.

Returned Field Lists of the SELECT statements

The SELECT statements return the following Field Lists.

Tables

  • SchemaName
  • TableName

Views

  • SchemaName
  • TableName
  • Definition

Table and View Fields

  • SchemaName
  • TableName
  • FieldName
  • DataType
  • DataSubType
  • IsNullable
  • SizeInChars
  • SizeInBytes
  • DecimalPrecision
  • DecimalScale
  • DefaultValue
  • Expression
  • OrdinalPosition

Indexes

  • SchemaName
  • TableName
  • IndexName
  • FieldName
  • FieldPosition
  • IsUnique
  • IndexType

Triggers

  • SchemaName
  • TableName
  • TriggerName
  • TriggerType
  • Definition

Procedures

  • SchemaName
  • ProcedureName
  • ProcedureType
  • Definition

Constraints

  • SchemaName
  • ConstraintName
  • ConstraintType
  • TableName
  • FieldName
  • ForeignTable
  • ForeignField
  • UpdateRule
  • DeleteRule
  • FieldPosition

Sequences

  • SchemaName
  • SequenceName
  • CurrentValue
  • InitialValue
  • IncrementBy

Sample Database

This repository contains sql files to create the widely used dvdrental sample database and feed it with data. That database can then be used to test the SQL SELECT statements that return Database Schema information.

Sample Database Schema

The schema for the sample database can be found at the \Db\Schema folder. There is a sub-folder for each RDBMS containing *.sql files with DDL statements.

Sample Database data

The data for the sample database can be found at the \Db\Data folder. There is a sub-folder for each RDBMS containing *.sql files with INSERT INTO statements.

The order of *.sql file execution

The correct order of execution follows:

  • Tables
  • Data (Insert all data. File names are numbered.)
  • Foreign Keys
  • Indexes
  • Views
  • Triggers

CAUTION: Data *.sql files should be executed in a certain order, as their file names dictate.

How to execute *.sql files with Dbeaver

I use the community edition of the excellent Dbeaver tool. As their site states DBeaver Community is a free cross-platform database tool for developers, database administrators, analysts, and everyone working with data. It supports all popular SQL databases like MySQL, MariaDB, PostgreSQL, SQLite, Apache Family, and more.

In Dbeaver

  • Select the database node
  • Click on the toolbar button SQL | New SQL Script
  • Copy the content of an *.sql file and paste it to the Dbeaver SQL Script Editor
  • In SQL Script Editor, in the left-side toolbar, click on Execute SQL script button

How to create the sample Database

Some databases can be created using a tool like Dbeaver. Others require CLI tools.

FirebirdSql

The isql Firebird tool may be used to create a new Firebird database.

  • Run a terminal as administrator
  • cd to where isql is installed, e.g. C:\Program Files\Firebird\Firebird_5_0
  • in the terminal type the following 3 lines, hitting the Enter key after each one.
isql

CREATE DATABASE 'C:\path\to\DVD.fdb' USER 'SYSDBA' PASSWORD 'YourPassword' PAGE_SIZE 32768 DEFAULT CHARACTER SET UTF8;

exit;

Here is a copy of my terminal.

C:\>cd C:\Program Files\Firebird\Firebird_5_0
C:\Program Files\Firebird\Firebird_5_0>isql
Use CONNECT or CREATE DATABASE to specify a database
SQL> CREATE DATABASE 'C:\path\to\DVD.fdb' USER 'SYSDBA' PASSWORD 'YourPassword' PAGE_SIZE 32768 DEFAULT CHARACTER SET UTF8;
SQL> exit;
C:\>

MsSql, MySql or MariaDb and PostgreSql

Using Dbeaver.

  • Add a connection to the database server, e.g. localhost,
  • expand the connection tree node,
  • right click on the Databases tree node
  • click on the Create New Database menu item
  • in the Create... dialog box provide a database name, e.g. DVD
  • click on OK button

Sqlite

Using Dbeaver.

  • Click the menu Database | New Database Connection,
  • select the Sqlite driver,
  • in the Connection dialog box provide a path for the database file, e.g. C:\Temp\DVD.db3 and
  • click on the Create button.

How to drop the sample Database

FirebirdSql and Sqlite

Close any connection to the database.

Go to database’s folder and delete the database file.

MsSql, MySql or MariaDb and PostgreSql

Close any connection to the database.

In Dbeaver select the database and hit the Delete key.

Oracle Databases

In an Oracle Server, there are several types of databases that can be created. Two of them are the most common:

  • Container Database (CDB): A database that contains multiple pluggable databases (PDBs), introduced in Oracle 12c.
  • Pluggable Database (PDB): A self-contained database that can be plugged into a CDB, providing improved manageability and scalability.

Common users are users that exist in the root container (CDB$ROOT)
and can be accessed from any pluggable database (PDB) in the CDB.

Trying to create a common user in a CDB database as

    CREATE USER User_Name IDENTIFIED BY Password ACCOUNT UNLOCK;

results in the following error:

    ORA-65096: invalid common user or role name

To create a common user in a CDB database the user name must be prefixed with C## or c##

    CREATE USER c##User_Name IDENTIFIED BY Password ACCOUNT UNLOCK;

Oracle Database, Schema and User

In Oracle a database is a collection of Schemas.

A Schema is owned by a User. User and Schema share the same name. In Oracle, users and schemas are essentially the same thing.

A Schema is a collection of database objects, such as tables, views, indexes, etc.

That is, a Schema/User has its own collection of tables, views, etc., in a database.

The sqlplus Command Line Tool

The sqlplus command line tool provides access to Oracle databases. Using sqlplus the user can

  • Startup and shutdown an Oracle database
  • Connect to an Oracle database
  • Enter and execute SQL commands and PL/SQL blocks
  • Format and print query results

The sqlplus must be in the system path.

To connect using sqlplus the following is used

    sqlplus SYSTEM/YourPassword@localhost as SYSDBA

To find out already existing Pluggable Databases (PDBs)

Connect to sqlplus.

    sqlplus SYSTEM/YourPassword@localhost as SYSDBA

And then any of the following

    Show pdbs; 
    select * from DBA_PDBS;
    select name, open_mode from v$pdbs;  

To create a Pluggable Database (PDB)

The following uses the PDBSEED to create a Pluggable Database, meaning that the PDBSEED is used as a template.

The following assumes that Oracle Database Express Edition (XE) is used and it is installed in C:\Oracle.

    sqlplus SYSTEM/Password@localhost as SYSDBA

    create pluggable database PDB1 admin user OwnerUserName identified by "OwnerUserPassword"
        ROLES = (dba) 
        default tablespace PDB1_USERS
        datafile 'C:\Oracle\oradata\XE\PDB1\pdb1_users01.dbf' size 2g autoextend on
        storage (maxsize 10g max_shared_temp_size 10g)
        file_name_convert=('C:\Oracle\oradata\XE\pdbseed', 'C:\Oracle\oradata\XE\PDB1\');

    alter pluggable database  PDB1 open read write force;   

    alter user OwnerUserName quota unlimited on PDB1_USERS;

To create a new User in a Pluggable Database (PDB)

Connect to sqlplus.

    sqlplus SYSTEM/YourPassword@localhost as SYSDBA

List the pluggable databases.

    select name, open_mode from v$pdbs;  

Get a name of a pluggable database from the previous list, e.g. XEPDB1 and open it.

    alter session set container=XEPDB1;
    alter pluggable database XEPDB1 open;

Create the new user.

    create user YourUserName identified by YourPassword ACCOUNT UNLOCK;
    grant all privileges to YourUserName identified by YourPassword;

There are a lot of variations in granting privileges to a user.

To drop a Pluggable Database (PDB)

Connect to sqlplus.

    sqlplus SYSTEM/YourPassword@localhost as SYSDBA    

Close any connection.

    alter pluggable database PDB1 close immediate instances=ALL; 

To drop a pluggable database and keep its datafiles.

    drop pluggable database PDB1 keep datafiles;     

To drop a pluggable database along with its datafiles.

    drop pluggable database PDB1 including datafiles;

Create the sample Database for Oracle

    sqlplus SYSTEM/YourPassword@localhost as SYSDBA

    create pluggable database DVDDB admin user DVD identified by "DVD_User_Password"
        ROLES = (dba) 
        default tablespace DVDDB_USERS
        datafile 'C:\Oracle\oradata\XE\DVDDB\dvddb_users01.dbf' size 2g autoextend on
        storage (maxsize 10g max_shared_temp_size 10g)
        file_name_convert=('C:\Oracle\oradata\XE\pdbseed', 'C:\Oracle\oradata\XE\DVDDB\');

    alter pluggable database DVDDB open read write force;   

    alter session set container=DVDDB;

    alter pluggable database DVDDB open;

    alter user DVD quota unlimited on DVDDB_USERS;

Schema and Data origin, thanks to Ottmar Gobrecht

Sample schema and sample data files were taken from Ottmar Gobrecht’s sample-data-sets-for-oracle repository and reworked for the supported RDBMS.

Many many thanks to Ottmar Gobrecht for his work.

Posted in Databases, Dev

Databases, remarks and thoughts

This text is about relational databases from the point of view of a software developer. It contains just remarks, not strict guidelines. Thoughts on what to do and what to avoid.

Table types

A database table may belong to one of the following categories, in regard to the nature of its data and the number of rows it may have: Master, Lookup, Transaction and Correlation table.

  • Master. A CUSTOMER or MATERIAL table is considered a master table. A master table does not record historical data. It functions merely as a registry of the main entities in an application. Other tables may have foreign keys to it. A master table usually has a lot of columns and many thousands of rows. A master table may contain foreign keys to other master or lookup tables.
  • Lookup. A COUNTRY, STATE, MEASURE_UNIT or OCCUPATION table is considered a lookup table. A lookup table does not record historical data. It functions merely as a registry of secondary entities in an application. Other tables may have foreign keys to it. A lookup table usually has a few columns and, at most, a few hundred rows. It usually has ID and NAME, or ID, CODE and NAME columns. A perfect lookup table contains no foreign keys to other tables.
  • Transaction. A transaction table records transactions of master tables. In many cases it contains a datetime column, e.g. ENTRY_DATE. Transaction tables are sometimes called historical tables or even trade tables. A transaction table can easily have millions of rows. An ORDERS or TRADE table is considered a transaction table. Transaction data very often require two or even more tables in a master-detail relationship, forming a table tree. For instance ORDERS and ORDER_LINES, where ORDERS, the master transaction table, contains information regarding the customer, the date of the transaction, etc., while ORDER_LINES, the detail transaction table, records information regarding goods, quantities and prices, along with an ORDER_ID foreign key to the ORDERS table.
  • Correlation. A correlation table correlates two, or even more, master tables. Usually a correlation table records only IDs from those master tables and nothing more. A correlation table is used to record one-to-one, one-to-many and many-to-many relationships. For instance a CAR and a DRIVER master table may require a CAR_DRIVER correlation table, having CAR_ID and DRIVER_ID foreign key columns to the respective tables. A car driver may have many cars in his responsibility.

Regarding business logic, a database application may be logically divided into business modules. Each module, say a Customer, Store or Sales module, uses a set of related tables. For instance, an imaginary Sales module may use the CUSTOMER, MATERIAL, TRADE and TRADE_LINES tables.

Normalization

Normalization is a term used in database programming to describe the techniques involved in designing tables in order to minimize duplication of information.

For instance, in a CUSTOMER table it’s not wise to have an OCCUPATION_NAME string column. Instead you use an OCCUPATION_ID foreign key column pointing to the ID primary key column of the OCCUPATION table. The same holds true for an ORDER_LINES transaction table and a MATERIAL master table. ORDER_LINES should have a MATERIAL_ID column, pointing to the MATERIAL table, and not a MATERIAL_NAME column.

In short, a bit of information is stored once and only once, in a certain table. Any other table that wants access to that bit of information just maintains a reference to it, using a foreign key column. That’s called normalization.

Normalization is not a panacea. It is used in a so-called Production or Operational database, that is, a database used in the daily processing of transactions. A database used by an ERP system is such a database.

Normalization is not used in a Warehouse database, where the primary use is reporting, data analysis and data mining. That kind of database contains non-normalized data.

You may check entries such as Database Normalization, Data Warehouse and Data Mining on Wikipedia.

But regarding Normalization, please don’t spend much time reading those texts, unless you’re a student looking for reference material for the next semester. Conventional wisdom is quite enough.

Have a COUNTRY table with just an ID, CODE and NAME. In all other tables where Country information is needed, a COUNTRY_ID foreign key referencing the COUNTRY.ID column is enough.

Primary keys

I follow the next rule:

Every table in a database has

  • a single column as its Primary Key
  • named ID (not COUNTRYID or countryid or country_id, just ID)
  • of data type integer or GUID string
  • which is unique (that is, it uniquely identifies its row)
  • which has no business meaning at all (which is very, very important)
  • which is never changed or re-used
  • and which the application user almost never sees.

A Primary Key should consist of a single column.

Avoid Composite Primary Keys, sometimes called Compound Keys, that is, Primary Keys consisting of two or more columns. Single-column Primary Keys are easier to maintain and to locate in a search or filter.

There are cases though where a composite Primary Key may be used, as is the case of correlation tables. Even in this case though I find it easier to have a single column as Primary Key and use a Unique Constraint on the correlated columns in order to guarantee the uniqueness of their combination.

I have not yet found a case where a single column Primary Key is bad.

The data type of a Primary Key column may be an integer or a GUID string.

Prefer GUID strings, although integers are easier on the human eye. GUID string Primary Keys make it easier to migrate data from one system to another.
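
For instance, a GUID key value can be produced entirely in application code, without asking the database for anything, which is what makes merging and migrating data easier. A minimal C# sketch (the upper-case "D" formatting is just a convention):

```csharp
using System;

public static class Demo
{
    public static void Main()
    {
        // A new GUID is generated by the application, not by the RDBMS,
        // so rows created on different systems can never collide on ID.
        string Id = Guid.NewGuid().ToString("D").ToUpperInvariant();
        Console.WriteLine(Id);        // e.g. 3F2504E0-4F89-41D3-9A0C-0305E82C3301
        Console.WriteLine(Id.Length); // 36 characters: 32 hex digits plus 4 dashes
    }
}
```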

Integer Primary Keys come in two forms:

  • auto-increment columns, such as MS SQL identity columns or MySql AUTO_INCREMENT columns.
  • or unique number generators as in Interbase/Firebird and Oracle.

Primary Keys sometimes are called Object Identifiers or OIDs.

Triggers and Stored Procedures

No stored procedures.

No triggers.

Stored procedures and triggers tie an application to a certain RDBMS such as Oracle, MS SQL, etc. because each RDBMS has its own stored procedure and trigger language dialect.

Besides that, using stored procedures and triggers distributes and fragments the business logic code. Business logic code should be in one place. And this place is the application source code.

Searching the web for “are stored procedures faster than queries?” results in a lot of debates.

There is a rumor floating around that stored procedures execute in a fraction of the time that normal SQL statements take, because the database server pre-compiles the stored procedure code.

I’m not quite sure about that rumor, but I have to say that a decent database server should retain execution plans for all SQL statements in its internal cache, not just for stored procedures. If it doesn’t, I consider that a major flaw.

I admit though that my opinion lacks the proper validity.

To force an RDBMS to generate and re-use an execution plan on subsequent calls, use the exact same SQL statement more than once. The secret to achieving this is to use parameterized SQL statements. Something like "select * from CUSTOMER where ID = :ID". From a database server's point of view this statement looks much better than the usual "select * from CUSTOMER where ID = 1234".

Every SQL statement needs an execution plan. The first time a SQL statement is executed, be it a stored procedure, a trigger, a SELECT, or any other statement, the RDBMS generates an execution plan. And then executes the statement. The next time it just re-uses the previously generated plan. Thus in next executions the statement is executed faster than the first time.
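
The point can be sketched in a few lines of C#. Treat the statement text as the key of the server's plan cache; this is a deliberate simplification (real cache keys involve more than the text), and the two helper methods are hypothetical, invented for this example:

```csharp
using System;
using System.Collections.Generic;

public static class Demo
{
    // Hypothetical helpers: one builds parameterized SQL (the value travels
    // separately as a parameter), the other concatenates the value into the text.
    public static string Parameterized(int id) => "select * from CUSTOMER where ID = :ID";
    public static string Concatenated(int id) => "select * from CUSTOMER where ID = " + id;

    public static void Main()
    {
        var PlanCache = new HashSet<string>(); // stands in for the server's plan cache

        foreach (int Id in new[] { 1234, 1235, 1236 })
            PlanCache.Add(Parameterized(Id));
        Console.WriteLine(PlanCache.Count); // 1 - one plan, re-used for every call

        PlanCache.Clear();
        foreach (int Id in new[] { 1234, 1235, 1236 })
            PlanCache.Add(Concatenated(Id));
        Console.WriteLine(PlanCache.Count); // 3 - a fresh plan per distinct statement
    }
}
```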

Views

All the RDBMS I know of use the same statement to create a view.

create view VIEW_NAME as
select
  ...
from
  ...

The problem with the different RDBMS language dialect is not present here.

But again, as with stored procedures and triggers, using views distributes and fragments the business logic code. Business logic code should be in one place. And this place is the application source code.

Constraints and Referential Integrity

not null and unique constraints are necessary and should be part of the create table statement.

check constraints, e.g. check (Age >= 21), enforce a rule very different from a not null or unique rule. Prefer enforcing these check rules in the application code.

Referential Integrity dictates that in order to add a value in the CUSTOMER.COUNTRY_ID column, this value must exist in the COUNTRY.ID column.

The rule here is: a reference must be valid.

Declarative Referential Integrity is a term denoting that RDBMS is responsible in enforcing the rule of integrity.

This is done using a declaration, known as foreign key constraint, either when creating the table or later using an alter table statement.

create table CUSTOMER (
  ID            integer not null primary key,
  COUNTRY_ID    integer not null,
  ...
  foreign key (COUNTRY_ID) references COUNTRY(ID)
)

There is another way though in enforcing the integrity rule: use application code to enforce it.

Which is better depends on many factors, of which the most important is to know what you’re doing and why.

There are systems that do not use declarative referential integrity. Microsoft’s Navision/Business Central is one of them. Navision/Business Central enforces referential integrity using application code.

One thing is for sure: it is easier to migrate data from one system to another when there is no declarative referential integrity.

Indexes

An index helps an RDBMS to search, filter and sort the data faster. Without an index it has to do a full table scan when executing a query. So indexes are very important.

Deciding what to index, and what not to, is an art and a science at the same time. What is good for one application may be bad for another.

In deciding what to index there are no strict rules. Just some remarks and guidelines:

  • avoid over-indexing. Every INSERT, UPDATE or DELETE statement forces the RDBMS to update not only the data but the indexes too. Having many indexes slows execution
  • index data are updated whenever table data are updated
  • primary keys and unique constraints are maintained by the RDBMS using an index
  • indexes take part in the generation of an execution plan, so contradicting indexes may confuse the RDBMS optimizer
  • prefer single-column indexes over multi-column indexes
  • in multi-column indexes place the columns with the fewest distinct values first, e.g. MARRIED, NAME
  • in an ERP or similar application with a filtering UI, it is not possible to know what the user will decide to place in the query filter
  • learn about index selectivity
  • learn what index statistics mean to your RDBMS
  • search the web for SQL indexing best practices

Maintenance and database health

Here are some remarks and guidelines

  • Most RDBMS come with a number of maintenance and health-check tools. Check what is available and how to use it.
  • Reindex frequently. How frequently depends on the system.
  • Check database health frequently.
  • Schedule frequent backups.
  • With an RDBMS that has a Full Recovery Model, such as MS SQL, examine whether the Simple model is better for the case.
  • Create and use time-scheduled maintenance jobs using tools such as Windows Task Scheduler or Linux crontab.

Posted in Databases, Dev

Microsoft Authentication Library for .NET

Source code on GitHub.

Microsoft Authentication Library for .NET (MSAL.Net) is a .NET library that enables applications to perform authentication operations using Microsoft Entra ID.

This text explains what MSAL is and how to use it in .Net applications along with source code and demo applications.

Desktop Login

Microsoft Entra ID

The Azure Active Directory is now named Microsoft Entra ID.

Microsoft Entra ID is a cloud-based Identity Provider and Access Management service.

Microsoft Entra ID provides authentication and authorization services to

  • Microsoft Azure
  • Microsoft 365
  • Dynamics 365
  • other Microsoft services and solutions
  • and third-party services.

Microsoft Authentication Library for .NET (MSAL.Net)

An application uses MSAL.Net to acquire an Access Token from Microsoft Entra ID

  • in order to allow access to its users
  • or access another protected Web, Desktop or Mobile application or service.

An application must be registered with Microsoft Entra ID in order to be protected.

MSAL.Net is available on several .NET platforms (desktop, mobile, and web).

Main topics in Microsoft documentation

There are two main topics in documentation

MSAL.Net uses OAuth

MSAL.Net uses OAuth flows. A basic familiarity with OAuth is required.

Here is an introductory short text about OAuth, summarizing what OAuth is and the key terms related to it.

Posted in C#, Dev, Tutorials

OAuth v2 at a glance

A summary of what OAuth is and the key terms related to it.

OAuth

OAuth (Open Authorization) is an authorization specification for access delegation.

A User may grant access to his protected information stored in a website or service, to other services, applications and websites, without giving his credentials to those services or applications.

The authorized service or application acts on behalf of the User (delegation) in order to access the protected information. The User may specify the permissions the authorized service or application should have.

OpenID Connect (OIDC)

OpenID Connect (OIDC) is an authentication specification built on top of OAuth v2.

Client applications use the OIDC protocol to request an ID token in order to authenticate a user.

Participants

Participant is a term used by OAuth in order to denote a person or system participating in the procedure that gives access to protected resources.

Following is the list of OAuth participants.

  • Resource Owner. The User who has ownership of the protected resources.
  • Client. A service or application requesting permissions to access the protected resources.
  • Authorization Server. A service that authenticates the Resource Owner and, under his authorization, issues the Access Token.
  • Resource Server. The service where the protected resources are stored.

Posted in Dev

Two Factor Authentication in C# using a mobile phone Authenticator application

Source code on github.

For security reasons, many of today’s applications and web sites use two-step authentication.

This type of authentication is called Multi-Factor or Two-Factor Authentication.

The user is prompted to enter a username and password, and right after that is prompted to enter another code.

For that second step the application developer has a number of choices. One of them is an authenticator application.

Authenticator applications

An authenticator application is a mobile phone application. There are a number of free authenticator applications.

In a mobile authenticator application the user may create a list of accounts. Such an account consists of

  • the user name or title
  • the name of the protected application, web site or service the account relates to
  • a secret key produced by that protected application, web site or service.

Such an account may be created manually by the user, by entering the three elements above, or by using the barcode scanner of the authenticator application to scan a QR Code produced by the protected application, web site or service. That QR Code incorporates all three elements needed to create an authenticator account.

After that account setup, the authenticator application generates a unique six-digit code, called a Time-based One-Time Password (TOTP), every 30 seconds.

The authenticator application generates the TOTP six-digit code based on the time and the secret key of the account. No internet connection is required.

The user gets that TOTP six-digit code from the authenticator application and enters it in the protected application.

The protected application validates the entered code using, again, the time and the secret key of the account.

Both the authenticator application and the protected application store the secret key of the account in order to be able to produce and validate TOTP codes.

The protected application, web site or service has no connection to or knowledge of the authenticator application. In fact, any TOTP-compliant authenticator can be used.

The Internet Engineering Task Force (IETF) describes the TOTP algorithm in the RFC 6238 specification. Both the TOTP authenticator application and the protected application implement the TOTP algorithm.
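
As a rough illustration of the algorithm described above (this is a minimal sketch, not the code from the linked repository), a TOTP computation in C# looks like this:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class Totp
{
    // RFC 6238: HMAC-SHA1 over the number of 30-second intervals since the
    // Unix epoch, dynamically truncated to the last 6 decimal digits.
    public static string Compute(byte[] SecretKey, DateTimeOffset Time, int Digits = 6, int StepSeconds = 30)
    {
        long Counter = Time.ToUnixTimeSeconds() / StepSeconds;

        // the counter is encoded as an 8-byte big-endian value
        byte[] CounterBytes = new byte[8];
        for (int i = 7; i >= 0; i--)
        {
            CounterBytes[i] = (byte)(Counter & 0xFF);
            Counter >>= 8;
        }

        using HMACSHA1 Hmac = new(SecretKey);
        byte[] Hash = Hmac.ComputeHash(CounterBytes);

        // dynamic truncation (RFC 4226): the low 4 bits of the last byte
        // select a 4-byte slice of the hash
        int Offset = Hash[^1] & 0x0F;
        int Binary = ((Hash[Offset] & 0x7F) << 24)
                   | (Hash[Offset + 1] << 16)
                   | (Hash[Offset + 2] << 8)
                   | Hash[Offset + 3];

        return (Binary % (int)Math.Pow(10, Digits)).ToString().PadLeft(Digits, '0');
    }

    public static void Main()
    {
        // the shared secret from the RFC 6238 test vectors
        byte[] Secret = Encoding.ASCII.GetBytes("12345678901234567890");
        Console.WriteLine(Compute(Secret, DateTimeOffset.FromUnixTimeSeconds(59))); // 287082
    }
}
```

Both sides run this same computation over the shared secret and the current time; if the codes match within the time window, the second factor succeeds.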

Posted in C#, Dev

A simple and efficient Logging framework in Free Pascal and Lazarus.

The Tripous Logging system is simple enough and easy to use.

The whole project can be downloaded from GitHub.

There is a text that describes the procedure of creating Logging system in Pascal.

Posted in Pascal

Crafting an in-memory TDataset descendant in Free Pascal – Lazarus: the ultimate adventure.

This text describes the adventure of creating the TMemTable, a TDataset descendant in Free Pascal and Lazarus.

The TMemTable is an in-memory TDataset which has most of the features a Pascal developer expects from a TDataset:

    • Bookmarks
    • Lookup and Calculated fields
    • Master-detail link
    • Sorting on multiple fields
    • Record Status filtering
    • Range filtering
    • Filtering based on an expression
    • Locate() and Lookup() methods
    • Blobs
    • Load from and Save to XML.

The whole project can be downloaded from github.

There is a text that describes the procedure of creating a TDataset descendant.

The TMemTable demo is in the .\Demos\MemTable folder.

Posted in Pascal | Tagged , , | Leave a comment

AxBcAdmin: an administration tool for any version of Business Central Server on-premises

Source code and releases can be found at Github.

Introduction

This application is an administration tool for any version of Microsoft Business Central on-premises. It aims to make it easy to manage the Business Central Server configuration.

Microsoft has retired the native Business Central Admin application in the BC 2022 Release Wave 2 (v21) on-premises version.

Not a polite move.

The settings file

Business Central Server configuration settings are stored in a file called CustomSettings.config.

The default location of the CustomSettings.config file is

`C:\Program Files\Microsoft Dynamics 365 Business Central\BC_VERSION\Service`.

CustomSettings.config is an XML file with multiple entries like the following

`<add key="ServerInstance" value="BC230" />`

Configuration keys are predefined strings and are described in the Configuring Business Central Server topic, in Microsoft Docs.

The recommended way to manage CustomSettings.config settings is now to use the following PowerShell cmdlet, e.g.

`Set-NAVServerConfiguration -ServerInstance "BC230" -KeyName ServerInstance -KeyValue "BC230_Prod"`

Not acceptable.

The AxBcAdmin features

This application is not as elegant as the good-old Microsoft Business Central Admin tool was, but nevertheless achieves the same goal.

The administrator can

  • start, restart or stop the BC service
  • configure any available setting
  • configure Database Credentials

How AxBcAdmin works

Just uncompress the AxBcAdmin.zip file somewhere on the Business Central server machine and then run AxBcAdmin.exe.

This application searches the Windows Services of the local machine, collects any service with a name starting with Microsoft Dynamics 365 Business Central Server and displays these services in a grid.

The administrator may select a BC service to start, restart, stop or edit its configuration. This application writes any setting change directly to the CustomSettings.config file.

The Database Credentials toolbar button displays a dialog box where the administrator may configure the database authentication of the Business Central server.

Posted in NAV/BC | Tagged | Leave a comment

Change collation of an existing Business Central database

If you try to change the collation in a Business Central database you’ll probably get a number of error messages. The result is that it is practically impossible to change the collation of the current database in place.

There is a procedure, though, that can be used to change the collation. It is just not well documented, as is the case with many other issues.

The correct procedure is to create a new empty database with the desired collation, export the data from the current database, and then import the data into the new database.

Here are all the steps involved in a PowerShell script.

# SEE: https://learn.microsoft.com/en-us/dynamics365/business-central/dev-itpro/cside/cside-change-database-collation
# SEE: https://github.com/MicrosoftDocs/dynamics365smb-devitpro-pb/issues/1096


#######################################################################
# Procedure for changing MsSql collation on a Business Central database
#######################################################################

#--------------------------------------------------------------------------------
# load modules
Import-Module 'C:\Program Files\Microsoft Dynamics 365 Business Central\230\Service\NavAdminTool.ps1'
Clear-Host


#--------------------------------------------------------------------------------
# required variables
$InstanceName = "BC230"
$NewDbName = 'BC'                              
$Collation = 'Greek_100_CI_AS'
$DataFilePath = 'C:/BC/Datafile.navdata'
$DBServer = 'localhost'

#--------------------------------------------------------------------------------
# get a list of installed extensions
# we need the names of the currently installed extensions in BC
# so copy them and keep them in a file, e.g. "Install Microsoft Extensions.txt"
Get-NAVAppInfo -ServerInstance $InstanceName |  Format-Table -Property Name #, Version, Publisher, Scope

#--------------------------------------------------------------------------------
# create an empty database
# or using Sql Server Management Studio create the database with the desired collation and Owner (e.g. Greek_100_CI_AS and DOMAIN/Administrator)
Install-Module -Name "SqlServer"

# remove the database if exists
Invoke-Sqlcmd -Query "USE master; IF DB_ID (N'$NewDbName') IS NOT NULL DROP DATABASE $NewDbName;" -ServerInstance $DBServer

# create the database
Invoke-Sqlcmd -Query "USE master; CREATE DATABASE $NewDbName COLLATE $Collation;" -ServerInstance $DBServer

# create the [NT AUTHORITY\NETWORK SERVICE] in the new database
Invoke-Sqlcmd -Query "USE $NewDbName; CREATE USER [NT AUTHORITY\NETWORK SERVICE] FOR LOGIN [NT AUTHORITY\NETWORK SERVICE];" -ServerInstance $DBServer

# make the [NT AUTHORITY\NETWORK SERVICE] user a db_owner member
Invoke-Sqlcmd -Query "USE $NewDbName; EXEC sp_addrolemember N'db_owner', N'NT AUTHORITY\NETWORK SERVICE';" -ServerInstance $DBServer

#--------------------------------------------------------------------------------
# export data from the old database
Export-NAVData -ServerInstance $InstanceName -IncludeApplication -IncludeApplicationData -IncludeGlobalData -AllCompanies -FilePath $DataFilePath

# import data to the new database, just Microsoft Application data, no Company data
Import-NAVData -DatabaseServer $DBServer -DatabaseName $NewDbName -IncludeApplication -IncludeApplicationData -FilePath $DataFilePath

# connect BC service to the new database
Set-NAVServerConfiguration -ServerInstance $InstanceName -KeyName DatabaseName -KeyValue $NewDbName 

# restart service
Restart-NAVServerInstance -ServerInstance $InstanceName -Verbose

# sync the tenant
Sync-NAVTenant -ServerInstance $InstanceName

# add the fundamental Microsoft extensions, in this order
Sync-NAVApp -ServerInstance $InstanceName -Name 'System Application' -Mode Add  # 1
Sync-NAVApp -ServerInstance $InstanceName -Name 'Base Application' -Mode Add    # 2
Sync-NAVApp -ServerInstance $InstanceName -Name 'Application' -Mode Add         # 3
 
#--------------------------------------------------------------------------------
# Extensions
# use the saved Extension names to create a PowerShell array as the following
# NOTE: remove the 3 Extensions already synced in the previous step
 
$Extensions = @(
"VAT Group Management",
"Send remittance advice by email",
"_Exclude_ClientAddIns_",
"Late Payment Prediction",
"Universal Print Integration",
"_Exclude_Email Logging Using Graph API",
"Essential Business Headlines",
"Payment Links to PayPal",
"_Exclude_APIV1_",
"Simplified Bank Statement Import",
"Bank Account Reconciliation With AI",
"Email - SMTP Connector",
"Data Archive",
"Contoso Coffee Demo Dataset",
"EU 3-Party Trade Purchase",
"Email - Outlook REST API",
"Test Runner",
"_Exclude_ReportLayouts",
"Shopify Connector",
"Email - Current User Connector",
"Performance Toolkit",
"Error Messages with Recommendations",
"OnPrem Permissions",
"Enforced Digital Vouchers",
"Service Declaration",
"Company Hub",
"Statistical Accounts",
"E-Documents Connector with External Endpoints",
"Sales and Inventory Forecast",
"_Exclude_APIV2_",
"Send To Email Printer",
"Intrastat Core",
"Payment Practices",
"Recommended Apps",
"_Exclude_Bank Deposits",
"Troubleshoot FA Ledger Entries",
"Permissions Mock",
"_Exclude_Master_Data_Management",
"_Exclude_PlanConfiguration_",
"Email - Microsoft 365 Connector",
"Review General Ledger Entries",
"SAF-T",
"_Exclude_Business_Events_",
"Data Search",
"API Reports - Finance",
"Email - SMTP API",
"Audit File Export",
"E-Document Core"
)

#--------------------------------------------------------------------------------------------------------------------------------
# execute Sync-NAVApp for all the above extensions
# you may need to run this twice because some extensions may fail because of dependencies
$Extensions | ForEach-Object { Sync-NAVApp -ServerInstance $InstanceName -Name $_  -Mode Add }

#--------------------------------------------------------------------------------------------------------------------------------
# import Company data and Global data to the new database   
Import-NAVData -ServerInstance $InstanceName -IncludeGlobalData -AllCompanies -FilePath $DataFilePath

#--------------------------------------------------------------------------------------------------------------------------------
# restart service
Restart-NAVServerInstance -ServerInstance $InstanceName -Verbose

# just check
Get-NAVApplication -ServerInstance $InstanceName

SEE:

Tested on:

  • Windows Server 2019 Datacenter
  • BC Version: W1 23.4 (Platform 23.0.15712.0 + Application 23.4.15643.15715)
  • Microsoft SQL Server 2019 – 15.0.2104.1
Posted in Databases, NAV/BC, PowerShell | Tagged , | Leave a comment

oVirt and the Bad volume specification error

There are many causes that may lead to a Bad volume specification error in oVirt virtualization.

So the first thing to be done is to review the VDSM logs on the SPM host and try to determine the cause.

The Log Collector tool may be used in reviewing log files. Also there are virtualization logs that can be found in various locations.

VDSM, vdsm-tool, vdsm-client

The VDSM service is a service used in managing virtualization hosts.

The vdsm-tool is a command line tool for configuring VDSM.

The vdsm-client is a command line tool that can be used to execute commands such as starting and stopping a VM or managing storage and devices.

The virsh command line tool

virsh is a command line tool, provided by libvirt, for managing VMs.

virsh is powerful. Here are some examples:

# VM list along with status
# The -r is needed, in every command, in order to avoid virsh authentication requests
virsh -r list

# Show the XML configuration of a VM along with storage information
virsh -r dumpxml --domain VM_NAME 

# VM gracefully shutdown 
# https://libvirt.org/manpages/virsh.html#shutdown
virsh -r shutdown --domain VM_NAME 

# VM force shutdown
# https://libvirt.org/manpages/virsh.html#destroy
virsh -r destroy --domain VM_NAME 

# Delete a VM
# https://libvirt.org/manpages/virsh.html#undefine
virsh -r undefine --domain VM_NAME 

# Delete a VM and its storage files
virsh -r undefine --domain VM_NAME  --remove-all-storage

# List snapshots and get snapshot names
# https://libvirt.org/manpages/virsh.html#snapshot-list
virsh -r snapshot-list --domain VM_NAME 

# Delete a snapshot
# https://libvirt.org/manpages/virsh.html#snapshot-delete
virsh -r snapshot-delete VM_NAME SNAPSHOT_NAME
virsh -r snapshot-delete --domain VM_NAME --snapshotname SNAPSHOT_NAME

Useful links regarding virsh.

GUIDs and IDs

Most oVirt entities are identified by an ID in GUID format.

Here are the most commonly needed IDs and how to get them.

  • VM Id: oVirt portal | Compute | Virtual Machines | VM Name | VM ID
  • Storage Id or Storage Domain Id: oVirt portal | Storage | Disks | ID
  • Storage Pool Id: instructions below
  • Image Group Id: oVirt portal | Storage | Disks | ID

Continue reading

Posted in IT | Tagged | Leave a comment

.Net Generic List and Dictionary in NAV

Introduction

It is possible to use .Net built-in generic classes, such as the generic List and Dictionary, in Microsoft Dynamics NAV.

Microsoft provides some documentation, of the usual quality, on how to use those .Net generic classes in C/AL code.

What the documentation clearly states is the following:

You cannot specify generic type names in C/AL. When a generic is instantiated by a constructor in C/AL, all type names are set to the System.Object type. For example, if you have a mylist DotNet variable for the System.List generic, you create an instance of mylist in C/AL as shown. mylist is instantiated as a List<Object> type.

The above, in an indirect manner, says that it is not possible to define type parameters in C/AL code when using a generic .Net class.

The above is not true. There is code in the Codeunit 6711 OData Action Management which does exactly what the documentation says is not possible.

NOTE: The proposed solution is tested against NAV 2018. I have no means to test it in older versions but I suspect it’ll be functional in versions as far back as NAV 2013.

Preparation

To use a global or local variable in C/AL code, it must first be declared using the menu of the NAV Development Environment, in View | C/AL Globals or View | C/AL Locals.

The variable’s DataType is always DotNet.

The Subtype is the fully qualified type name of the .Net type. For the generic list this fully qualified type name is the following:

'mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.System.Collections.Generic.List`1

Using generics in C/AL without type parameters

The following example displays the use of the generic .Net List as Microsoft’s documentation suggests, i.e. without using type parameters.

PROCEDURE DotNetGenList_1@1();
VAR 
    List1@1000: DotNet "'mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.System.Collections.Generic.List`1";
    List2@1001: DotNet "'mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.System.Collections.Generic.List`1";

    Index@2000: Integer;
    V@2001 : Integer;
BEGIN
    Message('Test: Generic List');

    List1 := List1.List();
    List2 := List2.List();

    // using the .Net generic list like that means that
    // the generic .Net List is instantiated as a List<Object> type.
    // SEE: https://learn.microsoft.com/en-us/dynamics-nav/using-generics
    List1.Add('One');
    List1.Add('Two');

    List2.Add(1);
    List2.Add(2);

    // list is 1-based and NOT 0-based as in .Net
    Index := List1.IndexOf('Two');
    V := List2.Item(Index);

    Message('%1', V) ;
END; 

Using generics in C/AL with type parameters

Without type parameters, a generic class uses the .Net Object as the type parameter. That is something like a List<Object>: a List that actually has no element type and may accept values of any type. It is like a list of the Variant type.

There are cases where code has to be sure that it is dealing with a string list, a decimal list, etc. And this can be done only by explicitly defining the type parameters.

Using the Codeunit 6711 OData Action Management as a guide on how to define type parameters in C/AL code, the conclusion is the following: Continue reading

Posted in NAV/BC | Tagged , | Leave a comment

VLANs and Access and Trunk Ports

Introduction

This text describes some computer networking terms regarding Switches and VLANs, and how VLAN tagging is used when receiving and transmitting Data Link Layer frames on Access and Trunk Ports.

In the OSI model of computer networking the Data Link Layer is the second layer, hence the nicknames Layer 2 and L2.

The L2 transfers data between network entities in a Network Segment. Such entities can be NICs or Switch Ports of the same or other Switches.

Ethernet Frame (or Layer 2 Frame or just Frame)

L2's Data Unit is called an Ethernet Frame, or just Frame. The Frame is the payload the L2 transfers between network entities and its format is specified by the IEEE 802.3 standard.

L2 Frames are transferred inside the boundaries of a certain LAN. There are other Layers, above L2, that are responsible for transferring data units between LANs or WANs.

Network Switch

A Network Switch is a multi-port device used to connect other devices to one or more networks.

Switches, when receiving or transmitting data, use Packet Switching, i.e. they group data into packets, where each packet consists of a header and a payload.

Switches use MAC Addresses to transmit Frames in Layer 2. Most modern Switches support Layer 3 too, the Network Layer, and can be configured to transmit data using IP Addresses, thus providing Routing functionality.

Often two or more Switches are linked together in order to increase throughput and other network capabilities. That inter-Switch linking is done using proprietary protocols, such as Cisco's StackWise and Dell's VLT.

Regarding configuration capability there are two types of Switches:

  • Managed, i.e. configurable
  • Unmanaged, i.e. not configurable.

This text is about configurable Switches.

VLAN (Virtual Local Area Network)

A VLAN is a way for multiple networks to have isolated traffic while sharing the same physical networking infrastructure, e.g. Switches, cables, etc. A VLAN forms a discrete Broadcast Domain.

By properly configuring networking equipment, such as Switches and NICs, it is possible to have multiple VLANs co-existing in the same physical networking infrastructure.

Each VLAN is assigned a unique ID, a number between 0 and 4095 (with 0 and 4095 reserved by the specification), which is called the VLAN ID or VID.

A modern Switch is a VLAN-aware device. That is, inside a Switch, an L2 Frame always belongs to exactly one VLAN.

A VLAN can span multiple switches.

VLAN Tagging

The IEEE 802.1Q is the networking specification about VLANs.

The specification dictates that, inside a VLAN-aware portion of a network, a L2 Frame must belong to a single VLAN only.

That belonging is denoted by tagging the Frame's header with the VLAN ID. If a Frame is not tagged then it is assumed to belong to the Default or Native VLAN (see below).
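For illustration, here is how the 4-byte 802.1Q tag is laid out: a 16-bit TPID of 0x8100 followed by a 16-bit TCI holding a 3-bit priority (PCP), a 1-bit DEI and the 12-bit VID. The Python helpers below are hypothetical, just to make the bit layout concrete:

```python
import struct

TPID_8021Q = 0x8100  # EtherType value that marks a frame as 802.1Q-tagged

def pack_vlan_tag(vid, pcp=0, dei=0):
    """Build the 4-byte 802.1Q tag: TPID + TCI (PCP 3 bits, DEI 1 bit, VID 12 bits)."""
    if not 0 <= vid <= 4095:
        raise ValueError("VID must fit in 12 bits")
    tci = (pcp << 13) | (dei << 12) | vid
    return struct.pack("!HH", TPID_8021Q, tci)

def unpack_vlan_tag(tag):
    """Return (pcp, dei, vid) from a 4-byte 802.1Q tag."""
    tpid, tci = struct.unpack("!HH", tag)
    assert tpid == TPID_8021Q, "not an 802.1Q tag"
    return (tci >> 13) & 0x7, (tci >> 12) & 0x1, tci & 0x0FFF

tag = pack_vlan_tag(vid=100, pcp=3)
print(unpack_vlan_tag(tag))  # → (3, 0, 100)
```

A Switch inserts this tag right after the source MAC address of the Frame on Trunk Ports, and strips it on Access Ports.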

Access and Trunk Ports

A Switch Port can operate in one of two modes:

  • Access (or Untagged). The Port belongs to a single VLAN only. It does not tag Frames.
  • Trunk (or Tagged). The Port belongs to multiple VLANs. Frame tagging is enabled. Continue reading
Posted in IT | Tagged , | Leave a comment

Python Json: Serialize and Deserialize objects and complex objects

Source code can be found at Github.

Introduction

Json is a language-independent way to represent objects as text and reconstruct objects from text.

Json is a lightweight data interchange text format. Using Json, an application may save or load objects to/from a medium such as a file or a database blob field, and even post or get objects to/from a web service.

Python support for Json

Python supports Json by providing a set of functions and classes, found in the json package.

  • The dumps() function serializes an object to a Json formatted string.
  • The loads() function deserializes a Json string into a Python dictionary.

Serialization considerations

There are many answers that can be found by searching the internet on "how to do json serialization in Python". Most of them suggest the following:

json.dumps(MyObject, default=lambda o: o.__dict__, indent=4)

Regarding the default parameter the documentation states: "If specified, default should be a function that gets called for objects that can’t otherwise be serialized. It should return a JSON encodable version of the object or raise a TypeError. If not specified, TypeError is raised."

The proposed solution is fine as long as the object being serialized, and any of its inner objects, provide an internal __dict__.

The date and datetime are types without a __dict__. Furthermore the dumps() function does not handle those two types the way it handles other primitives such as strings and integers.

The following code results in the runtime error: "Object of type WithDate is not JSON serializable". Without a default handler the dumps() function doesn't know what to do with a custom object, and even with the __dict__ lambda above it would still fail on the datetime value.

import json
from datetime import datetime 

class WithDate(object):
    def __init__(self) -> None:
        self.DT = datetime.now()

D = WithDate()
JsonText = json.dumps(D)
print(JsonText)
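One way around this, keeping the __dict__ approach, is to pass default a function that special-cases date and datetime values. The sketch below serializes them as ISO 8601 strings (a common convention, not the only possible one):

```python
import json
from datetime import date, datetime

def json_default(o):
    # Emit date/datetime values as ISO 8601 strings; fall back to __dict__ for objects.
    if isinstance(o, (date, datetime)):
        return o.isoformat()
    return o.__dict__

class WithDate(object):
    def __init__(self) -> None:
        self.DT = datetime(2024, 1, 15, 10, 30)

JsonText = json.dumps(WithDate(), default=json_default, indent=4)
print(JsonText)  # the DT field is serialized as "2024-01-15T10:30:00"
```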

Deserialization considerations

Deserialization is the reverse of serialization, that is, it converts a Json string into an object.

The loads() function performs the conversion according to this table. The table clearly states that a Json object is converted to a Python Dictionary.

Consider the following code.

import json
from datetime import datetime 

JsonText = """{ "Language": "Python", "Cool": true }""" 
Result = json.loads(JsonText)
print(type(Result)) # prints <class 'dict'>

In most cases the requirement in an application is to convert Json text to an instance of a custom class, say Customer or Invoice or something like that. Not to a Dictionary.

Another serious issue is, again, the date and datetime values. These, according to Json specification, are serialized to strings.

That is fine as long as they are deserialized back to date and datetime values. But that does not happen automatically.

When it comes to deserialization, there is no way to know if the value being deserialized should be converted to a date or datetime value. The only thing that may be useful is to examine the format of the string value.
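Both concerns — getting a custom class back instead of a Dictionary, and restoring datetime values — can be sketched with the object_hook parameter of loads(). The format check below is exactly the heuristic just mentioned: a string that parses with the expected ISO format is treated as a datetime (the class and field names are illustrative):

```python
import json
from datetime import datetime

ISO_FORMAT = "%Y-%m-%dT%H:%M:%S"

def revive(d):
    # Called by loads() for every decoded Json object (a dict).
    # Strings matching the ISO format are converted back to datetime values.
    for Key, Value in d.items():
        if isinstance(Value, str):
            try:
                d[Key] = datetime.strptime(Value, ISO_FORMAT)
            except ValueError:
                pass
    return d

class Customer:
    pass

JsonText = """{ "Name": "Python", "DT": "2024-01-15T10:30:00" }"""
Data = json.loads(JsonText, object_hook=revive)

# Rebuild a custom-class instance from the revived dictionary.
Obj = Customer()
Obj.__dict__.update(Data)
print(type(Obj.DT))  # <class 'datetime.datetime'>
```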

The source code

Following are the source code files used in this exercise. Continue reading

Posted in Dev | Tagged , | Leave a comment

Python Events

Source code can be found at Github.

Introduction

Event is "something that happens".

An Event in computer programming is a code construction.

An object, known as Publisher, informs other objects, known as Subscribers or Listeners, that something special is about to happen to it or has happened to it.

The actual cause of broadcasting such information to Subscribers could be a state change, say the Publisher's Name property changes, or any occurrence of some importance, say the Publisher's mouse is clicked.

Subscribers are objects interested in such state changes or important moments in the life of a certain Publisher object.

The Publisher object publishes the name of the event and provides the means for Subscriber objects to attach a proper function, called an event handler, to that event.

Then when the event takes place in the Publisher, the Publisher invokes the Subscriber's event handler function.

Events in Programming Languages

Events are directly or indirectly supported by a number of Programming Languages.

Java supports events using, the so called, Listener classes.

Javascript DOM objects support events in a, more or less, similar way.

Object Pascal and .Net languages, such as C#, support events too.

In Python the developer has to invent a way to create, publish, subscribe to and invoke events.

The rest of this text describes such a solution.

What is an event actually

An event handler is actually a callback function.

From Wikipedia: In computer programming, a callback or callback function is any reference to executable code that is passed as an argument to another piece of code.

In computer programming every code element resides in a memory address. Everything has a memory address.

If code knows the memory address of a function (i.e. has a reference to that function, say in a variable), it can call the function.

Consider the following.

def Add(a, b):
    return a + b

def Del(a, b): 
    return a - b

def NumberOperation(a, b, Func):
    return Func(a, b)

x = NumberOperation(5, 4, Add)
print(x)

x = NumberOperation(3, 2, Del)
print(x)

The Func argument may be Add or Del, effectively passing a reference to the corresponding function.

Note that when passing the Add or Del argument, the call operator, which is the (), is omitted. Thus the function is not called; just its reference is passed.

The Event class stores event handler function references in its internal _Handlers list.
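A minimal version of such an Event class could look like the following sketch (illustrative only; the class in the repository may differ in details):

```python
class Event:
    """Stores Subscriber event handler references and invokes them on Trigger()."""

    def __init__(self):
        self._Handlers = []

    def Add(self, Handler):
        # Subscribe: keep a reference to the handler function (no call operator).
        if Handler not in self._Handlers:
            self._Handlers.append(Handler)

    def Remove(self, Handler):
        if Handler in self._Handlers:
            self._Handlers.remove(Handler)

    def Trigger(self, *Args, **KwArgs):
        # The Publisher calls Trigger(); every subscribed handler is invoked.
        for Handler in self._Handlers:
            Handler(*Args, **KwArgs)

# Usage: a Subscriber attaches a handler, the Publisher triggers the event.
Received = []
OnNameChanged = Event()
OnNameChanged.Add(lambda NewName: Received.append(NewName))
OnNameChanged.Trigger("New Name")
print(Received)  # → ['New Name']
```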

The source code

Following are the source code files used in this exercise. Continue reading

Posted in Dev | Tagged , | Leave a comment

NAV and Business Central Purchase Approval Workflows

Introduction

A Workflow is a sequence of steps towards the completion of a task.

In NAV and BC a Workflow is a sequence of events.

An Event is triggered when a specified Condition becomes true and leads to a Response.

Approval Workflow

An Approval Workflow is a special Workflow.

The administrator may configure the system so an approval must be required for a specific task to be completed.

The task could be

  • a new Vendor
  • a change to a Customer's Credit Limit
  • a change to an Item's Unit Price
  • a new Purchase Order

Steps to configure an Approval Workflow

  • Email Setup
  • Approval Users Setup
  • Workflow Groups Setup
  • Workflow Setup

Email Setup

In NAV, and the first BC versions, Email Setup is done by going to "SMTP Mail Setup" page.

In newer BC versions things have changed, although the old way is still available through the SMTP Connector extension.

In the latest BC versions multiple Email Accounts are used, with one being the default.

Approval Users Setup

The "Approval User Setup" page is a list page where the administrator configures the users that participate in Workflow Approval Chains.

All users participating in Workflows, be they Approval Requesters or Approvers, must be set up on that page.

User A may be set up to have user B as approver, and user B may be set up to have user C as approver, and so on, thus forming an Approval Chain. Continue reading

Posted in NAV/BC, Tutorials | Tagged , | Leave a comment

Microsoft NAV Database Encryption Keys

Introduction

The NAV Server provides two Database Authentication Modes:

  • Windows Authentication
  • SQL Server Authentication

When the NAV Server uses SQL Server Authentication mode to connect to the database, an encryption key is used in order to encrypt the credentials used.

The encryption key can be found at C:\ProgramData\Microsoft\Microsoft Dynamics NAV\[NAV VERSION]\Server\Keys folder.

The encryption key is also stored in the database, in the dbo.$ndo$publicencryptionkey table.

The Problem

Let's say there is a Windows server used as the NAV development environment, with a local installation of the MsSql server.

The developer restores a newly taken backup to the development server, then changes the database name in the settings of the NAV Administration tool and restarts the NAV server.

The NAV server refuses to run and instructs the developer to consult the Event Viewer logs.

The developer searches Event Viewer logs at Event Viewer | Application and Services Logs | Microsoft | Dynamics NAV | Server | Admin and finds an error message saying:

The NAV application could not be mounted for database ‘DB NAME’ on database server ‘SERVER NAME’ due to the following error: The Microsoft Dynamics NAV Server instance cannot connect to the application database because it is using a different password encryption key than the one currently used on the database

The Solution

● Backup the encryption key found at C:\ProgramData\Microsoft\Microsoft Dynamics NAV\[NAV VERSION]\Server\Keys. Continue reading

Posted in NAV/BC | Tagged | Leave a comment

Ansible tutorial

Ansible is an open source suite of tools used in application deployment, configuration management and cloud provisioning. Ansible connects to the hosts it manages using temporary SSH connections. It executes without using agents on the managed hosts: it just pushes modules to the managed hosts, executes a script, called a Playbook, and erases the modules. Ansible Playbooks are written in YAML.

Ansible runs on many linux distributions and in WSL in MS Windows.

An easy way to test Ansible

An easy way to test Ansible is to use VirtualBox.

Create a control host VM (i.e. the host from which to run the Ansible CLI tools) by using a handy Linux distribution, such as Ubuntu or Linux Mint.

For managed hosts refer to this guide in order to create one or more VMs running a minimal Ubuntu image.

An arrangement

A handy way to work with Ansible is the following.

Install Ansible on control host.

# Ubuntu - Linux Mint
apt update
apt install software-properties-common
add-apt-repository --yes --update ppa:ansible/ansible
apt install ansible # /usr/lib/python3/dist-packages/ansible

# CentOS
yum install epel-release
yum install ansible  

# check the version
ansible --version

Configure static IPs for control and managed hosts. For example

sudo nmcli connection add type ethernet con-name enp0s3 ifname enp0s3 ipv4.addresses 192.168.2.9/24 ipv4.gateway 192.168.2.1

In control host edit the /etc/hosts file, by adding the managed hosts.

127.0.0.1 localhost
127.0.1.1 control-host
192.168.2.9 managed-host-0
192.168.2.10 managed-host-1

Generate a ssh key and copy the ssh key to all managed hosts

ssh-keygen

ssh-copy-id user@managed-host-0
ssh-copy-id user@managed-host-1

In control host, cd to /home/USER and create a work folder, e.g.

cd ~
mkdir Ansible

Inside that work folder place ansible.cfg, hosts and test.yml files with the following content

ansible.cfg

[defaults]
inventory = hosts

hosts file

[servers]
managed-host-0
managed-host-1

test.yml

- name: A test play
  hosts: servers
  tasks:
  - name: Ping hosts
    ansible.builtin.ping:
  - name: Print a message
    ansible.builtin.debug:
      msg: Hello world

Finally, execute the test Playbook.

cd ~/Ansible
ansible-playbook test.yml

Ansible concepts

  • Control host. The host from which Ansible CLI tools are executed.
  • Managed hosts. The target devices (servers, network appliances or any computer) managed with Ansible.
  • Inventory. The list of hosts managed by Ansible.
  • Playbook. A collection of Plays in a YAML file.
  • Play. Contains variables, roles and an ordered list of Tasks. Continue reading
Posted in IT, Tutorials | Tagged | 1 Comment

Microsoft Navision and Business Central No. Series guide

This text describes how NAV and BC produce unique values for “auto-numbering” fields, such as the No. field of the Customer or Sales Header table. The term Number Series is used to denote this special functionality.

The machinery extends over three built-in tables and requires a Setup table (see below).

The No. Series table (308)

The No. Series table is the master table in the Number Series “module”.

The most important fields are the following.

Field          Type      Notes
Code           Code 20   Unique Id for the entry, e.g. CUST, BANK or SO
Default Nos.   Boolean   When true the field is automatically filled with the next number in the series.
Manual Nos.    Boolean   When true the user is allowed to manually type in the next number in the series.
Date Order     Boolean   When true the numbers are assigned chronologically.

The No. Series Line table (309)

The No. Series Line table is a detail table to the No. Series table.

The No. Series Line table is where the range of the Number Series is kept.

The most important fields are the following.

Field             Type      Notes
Series Code       Code 20   Points to the Code field of the No. Series table.
Starting No.      Code 20   The first “number” in the sequence, e.g. BANK0001
Ending No.        Code 20   The last “number” in the sequence, e.g. BANK9999
Warning No.       Code 20   A warning message is displayed when this “number” is reached.
Increment-by No.  Integer   The step of incrementing the numeric part of the “number”.
Last No. Used     Code 20   The last consumed “number”.
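As an illustration of the “Increment-by No.” and “Last No. Used” mechanics, the following hypothetical Python helper (not actual NAV code) steps the trailing numeric part of a code while preserving the prefix and the zero padding:

```python
import re

def increment_no(code, step=1):
    """Increment the trailing numeric part of a No. Series code, keeping its width."""
    m = re.search(r"(\d+)$", code)
    if m is None:
        raise ValueError("code has no trailing numeric part")
    digits = m.group(1)
    next_value = int(digits) + step
    # Keep the alphabetic prefix and pad the number back to its original width.
    return code[: m.start()] + str(next_value).zfill(len(digits))

print(increment_no("BANK0001"))          # → BANK0002
print(increment_no("SO-09998", step=2))  # → SO-10000
```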

The No. Series Relationship table (310)

The No. Series Relationship table is a correlation table. It relates two or more entries of the No. Series table.

This table is used in cases where there is more than one Number Series for the same entity, say Customer Domestic and Customer Foreign, and so there is a need to relate them. Continue reading

Posted in Dev, NAV/BC | Tagged , | Leave a comment

firewalld on CentOS

firewalld is a Linux daemon used to manage the firewall configuration. It is a free and open source solution provided by the firewalld project (https://firewalld.org/).

firewalld is used as the default firewall solution by Red Hat Linux, CentOS and other distributions based on Red Hat. It is also available for several other distributions.

firewalld is a front-end for the packet filtering system provided by the Linux Kernel (iptables or, on newer releases, nftables).

Runtime and Permanent Configuration

firewalld provides two sets of configuration options: runtime and permanent.

The --permanent flag is used to make a setting part of the permanent configuration. Permanent settings become effective only after reloading or restarting firewalld, or rebooting the system.

A setting without the --permanent flag becomes part of the runtime configuration.

For a setting to become part of both the permanent and the runtime configuration, two calls are required: one with the --permanent flag and one without.

The --permanent flag is also used when displaying or listing information: with the flag, the permanent configuration settings are displayed; without it, the runtime configuration is displayed.
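
For example, to open a TCP port in both the runtime and the permanent configuration, two calls are needed (the zone and port here are only illustrative):

```shell
# runtime only: effective immediately, lost when firewalld restarts
firewall-cmd --zone=public --add-port=8080/tcp

# permanent only: survives restarts, effective after the next reload
firewall-cmd --zone=public --add-port=8080/tcp --permanent

# compare the runtime and the permanent view
firewall-cmd --zone=public --list-ports
firewall-cmd --zone=public --list-ports --permanent
```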

Zones

A Zone is a group of firewall rules under a unique name, e.g. public. A zone defines a level of trust which governs what traffic is allowed from the networks the system connects to. Each network interface is assigned to a zone, and that zone defines the allowed traffic. firewalld comes with a set of predefined zones.

Zones have a target attribute. The target defines the default behaviour of a zone when incoming traffic does not match any of the specified rules. A target can be DROP, ACCEPT, REJECT, or default.

The ACCEPT target is used in the trusted zone to accept every packet not matching any rule.

The REJECT target is used in the block zone to reject (with the default firewalld reject type) every packet not matching any rule.

The DROP target is used in the drop zone to drop every packet not matching any rule. If no target is specified, every packet not matching any rule is rejected.
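
The zones and their targets can be inspected, and a zone's target changed, with firewall-cmd; a short sketch (the zone name is illustrative):

```shell
# list all predefined zones and the currently active ones
firewall-cmd --get-zones
firewall-cmd --get-active-zones

# show the full configuration of a zone, including its target
firewall-cmd --zone=public --list-all

# change the target of a zone; targets are permanent-only settings
firewall-cmd --permanent --zone=public --set-target=DROP
firewall-cmd --reload
```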

Services

A Service is a group of firewall rules under a unique name, e.g. ftp. A service is a configuration of ports, protocols and firewall helper modules that are loaded automatically when the service is enabled. A service’s configuration allows incoming traffic for that service.

There is an *.xml configuration file for each service in the /usr/lib/firewalld/services folder. The filename without the .xml extension is the service name displayed by the firewall when listing services.
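
Listing and enabling services is done with firewall-cmd; for example (the zone and service are only illustrative):

```shell
# list all services known to firewalld
firewall-cmd --get-services

# allow incoming ftp traffic in the public zone, runtime and permanent
firewall-cmd --zone=public --add-service=ftp
firewall-cmd --permanent --zone=public --add-service=ftp
```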

Here is an example of a demo.xml service file. Continue reading

Posted in IT | Tagged , , | Leave a comment

The hosts file (Linux and Windows)

The hosts file exists in both Linux and Windows.

In Linux it can be found at /etc/hosts, while in Windows it is located at %SystemRoot%\System32\drivers\etc\hosts.

The hosts file is a plain text file, editable with any text editor such as nano or Notepad, but only an administrator is allowed to edit it.

The hosts file contains zero or more lines, where each line consists of an IP address followed by one or more host names.

127.0.0.1  localhost loopback
::1        localhost
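
Assuming an entry like the following is added (the name devbox and its address are illustrative), the mapping can be verified with getent, which consults the same resolver sources:

```shell
# illustrative extra entry in the hosts file:
#   192.168.1.50  devbox

# verify the mapping without querying a DNS server
getent hosts devbox

# or simply
ping -c 1 devbox
```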

The purpose of the hosts file is to provide local host name resolution: the host names of a line map to the IP address of that line. These mappings are typically consulted before any DNS server is queried. Continue reading

Posted in IT | Tagged , , , | Leave a comment

Install NodeJS on Centos 8 Stream

Node.js is an open-source, cross-platform JavaScript run-time environment that executes JavaScript code outside of a browser.

NOTE: Execute the following commands as root user or using sudo.


# uninstall nodejs if it is already installed
dnf remove nodejs

# list all available nodejs modules
# in order to select a version and a profile
dnf module list nodejs

# reset any previously enabled version
dnf module reset nodejs

# enable the version of choice
# so its packages become available for installation
dnf module enable nodejs:16

# install by name, stream and profile
dnf module -y install nodejs:16/common 

# check the version
node -v 

Tested on:

  • CentOS 8 Stream
Posted in IT | Tagged , , , | Leave a comment

The dnf Package Manager

dnf is a Package Manager or Package Management System for Red Hat Linux and Linux distributions derived from Red Hat Linux such as CentOS, Fedora, Rocky Linux and Alma Linux.

A Package Manager, such as dnf, is used to install, upgrade or remove software packages on a Linux installation.

dnf is the next generation version of the yum Package Manager. dnf is intended to be the replacement for yum in Red Hat and rpm based systems.

NOTE: Execute the following commands as root user or using sudo.

Installation

dnf comes pre-installed on a CentOS 8 system. If it is not installed, installation is as easy as the following.

# install dnf
yum install dnf
 
# check dnf version
dnf --version

General Syntax

dnf [options] <command> [<args>...]

There is a rather long list of commands that can be used, such as install, remove, upgrade, group, module, list, etc.

Basic Usage

 
# install a package
dnf install PACKAGE

# re-install a package
dnf reinstall PACKAGE

# check if a specified installed package has updates available
dnf check-update PACKAGE

# check if any of the installed packages have updates available
dnf check-update

# update a specified package to the latest available version
dnf upgrade PACKAGE

# update all packages to the latest version that is both available and resolvable.
dnf upgrade

# update only absolutely required security patches
dnf upgrade-minimal

# remove one or more packages along with any packages depending on the packages being removed
dnf remove PACKAGE0 [PACKAGE1 PACKAGEn] 

# display information about a package
dnf info PACKAGE

Searching for Packages

Continue reading

Posted in IT | Tagged , , , | Leave a comment

The yum Package Manager

yum is a Package Manager or Package Management System for Red Hat Linux and Linux distributions derived from Red Hat Linux such as CentOS, Fedora, Rocky Linux and Alma Linux.

A Package Manager, such as yum, is used to install, update, upgrade or remove software packages on a Linux installation.

NOTE: Execute the following commands as root user or using sudo.

Basic Usage

 
# install a package
yum install PACKAGE

# see which installed packages have updates available
yum check-update

# update a package
yum update PACKAGE

# update all packages and their dependencies
yum update

# remove one or more packages
yum remove PACKAGE0 [PACKAGE1 PACKAGEn] 

# display information about a package
yum info PACKAGE

Searching for Packages

Continue reading

Posted in IT | Tagged , , | Leave a comment

Install xrdp on CentOS 8

NOTE: Before installing xrdp, add any secondary language you need to your system
using direct access to it, NOT an RDP connection, e.g. through the console of the virtualization system if it is a VM.

You may need to install EPEL repository, if it is not already installed.

# check if epel-release is installed
sudo yum repolist

# if not, then install it
sudo yum install epel-release

Install xrdp

# install xrdp, enable the daemon and start it
sudo yum install xrdp
sudo systemctl enable xrdp --now
sudo systemctl start xrdp

# check the status
sudo systemctl status xrdp

# create a new permanent firewall zone for xrdp
sudo firewall-cmd --new-zone=xrdp --permanent
 
# add port 3389 to the xrdp zone
sudo firewall-cmd --zone=xrdp --add-port=3389/tcp --permanent

# search the web for "what is my public ip address" to find your public IP
sudo firewall-cmd --zone=xrdp --add-source=PUBLIC_IP_ADDRESS --permanent
 
# reload firewall daemon
sudo firewall-cmd --reload
 
# do not forget to restart the service (or the machine)
sudo systemctl restart xrdp

Alas! switching languages not working at all

Switching languages does not work at all.

PLEASE: If you find a solution on how to make switching languages work properly
when connecting from a MS Windows system to CentOS through xrdp, please let me know.

Tested on:

  • CentOS 8 Stream
Posted in IT | Tagged , , | Leave a comment

Install xrdp on Ubuntu 20.04

NOTE: Before installing xrdp, add any secondary language you need to your system
using direct access to it, NOT an RDP connection, e.g. through the console of the virtualization system if it is a VM.

# install xrdp
sudo apt install xrdp -y
sudo systemctl status xrdp

# the xrdp user must be a member of the "ssl-cert" group
sudo usermod -a -G ssl-cert xrdp 

# if the UFW firewall is in use,
# then you should open the RDP port 3389 to the address you connect from;
# search the web for "what is my public ip address" to find your public IP
sudo ufw allow from PUBLIC_IP_ADDRESS to any port 3389
sudo ufw reload

# if you experience the black screen issue 
# then edit the /etc/xrdp/startwm.sh 
sudo nano /etc/xrdp/startwm.sh 

# by adding the following lines
# just before the commands that test and execute XSession

unset DBUS_SESSION_BUS_ADDRESS
unset XDG_RUNTIME_DIR

# do not forget to restart the service (or the machine)
sudo systemctl restart xrdp 

A glitch

Switching languages does not work properly, at least when connecting from a MS Windows 10 system to Ubuntu through xrdp.

If you use the X button, upper right corner, to close the Windows RDP window, then the next time you connect to Ubuntu switching languages stops working.

The work-around I found is to NOT close the RDP window, but to log out from Ubuntu, which closes the RDP window too, by the way.

Tested on:

  • Ubuntu 20.04 with GNOME 3.36.8
Posted in IT | Tagged , , | Leave a comment

Add new user in CentOS with sudo permissions

# execute all the following as root user
# create a new user
useradd USER_NAME

# set the password of the new user
passwd USER_NAME

# add the new user in the sudoers group
# in CentOS the sudo group is called "wheel"
usermod -a -G wheel USER_NAME

# check the status of the wheel group in the sudoers file at /etc/sudoers
# (to edit it safely, prefer visudo, which validates the syntax)
# there should be a line like the following
# %wheel	ALL=(ALL)	ALL
nano /etc/sudoers

# or if there is a GUI
gedit admin:///etc/sudoers

# check if the user is in the user group "wheel"
getent group wheel

Tested on:

  • CentOS 8 Stream
Posted in IT | Tagged , | Leave a comment

How to list hardware on Linux – The lshw command

lshw (List Hardware) is a Linux command which provides detailed information about the hardware of a machine.

Use the lshw command to see the exact memory configuration, storage devices, network interfaces, CPU version, etc.

Many Linux distributions come with lshw already installed. If not, it is easy to install.

# Debian, Ubuntu, Mint
sudo apt-get install lshw

# Red Hat, CentOS, Fedora
sudo yum install lshw

For other distributions just check the Command Not Found portal.

Usage examples

# display help
lshw -help

# display device tree
lshw -short

# display information for a certain device "class", e.g. network interfaces
lshw -class network

The lshw documentation contains a list of the device classes that can be used with this command.

Tested on:

  • CentOS Stream 8
Posted in IT | Tagged | Leave a comment