Unit 3 Notes of Dot Net Unit 3
Assemblies
An assembly is the compiled unit of deployment in .NET. It contains Intermediate Language (IL)
code, metadata, and a manifest. Assemblies are the building blocks of .NET applications and
are used for deployment, versioning, and security.
Components of an Assembly:
Executable Code (IL): Assemblies contain compiled code in the form of IL, which is an intermediate
language that is platform-independent.
Metadata: Metadata within an assembly includes information about types, methods, properties, and
other elements. It serves as a detailed description of the contents of the assembly.
Manifest: The manifest is part of the assembly and contains information like version number, culture,
strong name, and references to other assemblies. It is crucial for versioning and security.
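As a small illustration (not part of the original notes), the metadata and manifest information described above can be inspected at runtime through reflection:

```csharp
using System;
using System.Reflection;

class ManifestDemo
{
    static void Main()
    {
        // Inspect the identity recorded in this assembly's manifest
        Assembly asm = Assembly.GetExecutingAssembly();
        AssemblyName name = asm.GetName();
        Console.WriteLine("Name:    " + name.Name);
        Console.WriteLine("Version: " + name.Version);

        // Enumerate the types described by the assembly's metadata
        foreach (Type t in asm.GetTypes())
            Console.WriteLine("Type:    " + t.FullName);
    }
}
```

The same information can be viewed outside of code with tools such as ildasm.exe.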
Types of Assemblies:
Private Assemblies:
A private assembly is used by a single application and is deployed in that application's own
directory.
Shared Assemblies:
A shared assembly is installed in the Global Assembly Cache (GAC) so that multiple applications
can share and reuse it, promoting code efficiency and consistent versioning.
Strong Naming:
A strong name includes the assembly's identity, version number, and a public key token, ensuring
uniqueness. A key pair can be generated with the Strong Name tool (sn.exe), for example:
sn -k MyKeyFile.snk.
code
// MathLibrary.cs
public class MathLibrary
{
    public static int Add(int a, int b)
    {
        return a + b;
    }
}
code
// Program.cs
using System;
class Program
{
    static void Main()
    {
        Console.WriteLine(MathLibrary.Add(2, 3)); // prints 5
    }
}
Versioning in Assemblies:
code
[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]
[assembly: AssemblyInformationalVersion("1.0.0")]
Delay Signing:
Delay signing allows an assembly to be partially signed during development and fully signed later.
code
[assembly: AssemblyDelaySign(true)]
// Specify the key file containing the public key (assembly remains partially signed)
[assembly: AssemblyKeyFile("KeyFile.snk")]
Summary:
C# assemblies are essential for organizing and deploying code in .NET applications. They provide a
structured way to manage code, ensure versioning, and enhance security. Understanding assembly
concepts is crucial for every .NET developer.
The Global Assembly Cache (GAC) in C# is a central repository for storing and managing shared
assemblies on a computer. The GAC provides a way to share and reuse assemblies across multiple
applications, ensuring that a single version of an assembly can be used by various programs. Here's
an overview of the GAC in C#:
Purpose:
The primary purpose of the GAC is to store assemblies that are intended to be shared among
multiple applications on a computer.
Location:
For .NET Framework 4.0 and later, the GAC is located at %windir%\Microsoft.NET\assembly
(earlier versions used %windir%\assembly).
Shared Assemblies:
Assemblies placed in the GAC are considered shared assemblies because they can be accessed by
multiple applications.
Strong Naming:
Assemblies in the GAC must be strongly named. Strong naming involves providing a unique identity
to the assembly using a combination of the assembly's name, version, culture, and a public key
token.
Versioning:
The GAC supports versioning, allowing multiple versions of the same assembly to coexist. Assemblies
in the GAC are identified by their version number.
Gacutil Utility:
The GAC can be managed using the Global Assembly Cache tool (gacutil). This command-line tool is
part of the .NET Framework SDK.
Example Commands:
code
gacutil /i MyLibrary.dll    (install an assembly into the GAC)
gacutil /u MyLibrary        (uninstall an assembly from the GAC)
gacutil /l                  (list the assemblies in the GAC)
Security:
Assemblies in the GAC must adhere to certain security policies. This ensures that shared assemblies
are safe for use by multiple applications.
Assuming you have an assembly named MyLibrary.dll and a strong name key file named
MyKeyFile.snk, here's how you might install it to the GAC:
Install to GAC:
code
gacutil /i MyLibrary.dll
Benefits of the GAC:
Code Reusability:
Assemblies in the GAC can be shared across different applications, promoting code reusability.
Versioning:
Multiple versions of the same assembly can be installed in the GAC side by side, so each
application can bind to the version it was built against.
Global Accessibility:
Assemblies in the GAC are globally accessible, making them available to any .NET application on the
machine.
Security:
Strong naming and adherence to security policies ensure that assemblies in the GAC are secure for
shared use.
Understanding the GAC and its usage is essential for developers who want to create and share
reusable components in the .NET ecosystem.
Threads
In C#, multithreading enables the concurrent execution of multiple threads, allowing for improved
performance and responsiveness in applications. Here's a comprehensive overview of threads in C#:
Key Concepts:
Thread:
A thread is the smallest unit of execution within a process. In C#, a managed thread is
represented by the System.Threading.Thread class.
Creating Threads:
Threads in C# can be created by instantiating the Thread class and passing a delegate (or using
lambda expressions) representing the method to be executed by the thread.
code
// Using a method (implicitly converted to a ThreadStart delegate);
// MyMethod is any parameterless void method
Thread myThread = new Thread(MyMethod);
myThread.Start();
code
// Using a lambda expression
Thread myThread = new Thread(() => Console.WriteLine("Hello from a thread!"));
myThread.Start();
Thread Lifecycle:
Threads go through various states, including Unstarted, Running, WaitSleepJoin, and Stopped.
The ThreadState enumeration provides information about the current state of a thread.
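A minimal sketch of these lifecycle states via the ThreadState property (the worker body is just a short sleep, invented for illustration):

```csharp
using System;
using System.Threading;

class ThreadStateDemo
{
    static void Main()
    {
        Thread worker = new Thread(() => Thread.Sleep(200));

        Console.WriteLine(worker.ThreadState); // Unstarted

        worker.Start();
        Console.WriteLine(worker.ThreadState); // Running or WaitSleepJoin

        worker.Join();
        Console.WriteLine(worker.ThreadState); // Stopped
    }
}
```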
Thread Synchronization:
Synchronization mechanisms, such as locks and monitors, are used to manage access to shared
resources and prevent data corruption in multithreaded environments.
code
using System;
using System.Threading;
class Program
{
    static void Main()
    {
        Thread t1 = new Thread(CountNumbers);
        Thread t2 = new Thread(CountNumbers);
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();
        Console.WriteLine("Main thread exiting.");
    }

    static void CountNumbers()
    {
        for (int i = 1; i <= 5; i++)
            Console.WriteLine("Thread " + Thread.CurrentThread.ManagedThreadId + ": " + i);
    }
}
In this example, two threads (t1 and t2) are created and started concurrently, each executing the
CountNumbers method. The Join method is used to ensure that the main thread waits for both
threads to complete before continuing.
code
using System;
using System.Threading;
class Counter
{
    private int count = 0;
    private readonly object lockObject = new object();

    public void Increment()
    {
        lock (lockObject)
        {
            count++;
        }
    }

    public int Count
    {
        get { return count; }
    }
}

class Program
{
    static void Main()
    {
        Counter counter = new Counter();
        Thread t1 = new Thread(() => { for (int i = 0; i < 5; i++) counter.Increment(); });
        Thread t2 = new Thread(() => { for (int i = 0; i < 5; i++) counter.Increment(); });
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();
        Console.WriteLine("Final count: " + counter.Count); // 10
    }
}
In this example, two threads increment a counter in a shared Counter class. The lock statement is
used to ensure that the Increment method is executed atomically, preventing race conditions and
ensuring thread safety.
ThreadPool:
The ThreadPool class provides a pool of worker threads that can be used to execute tasks
asynchronously without the need to create and manage individual threads explicitly.
Summary
Understanding threads is essential for developing responsive and efficient applications in C#.
Managing thread safety, synchronization, and utilizing features like the ThreadPool are crucial for
building robust multithreaded applications.
Context
In threading, the term "context" refers to execution contexts or synchronization contexts. Let's
explore these concepts:
Execution Contexts:
Thread Context:
Each thread has a unique identifier (ManagedThreadId) that distinguishes it from other threads.
Security Context:
The security context defines the permissions and privileges associated with a thread. It determines
what actions a thread is allowed to perform.
Locale Settings:
Locale settings include information about culture and language preferences, affecting how the thread
formats dates, numbers, and other locale-dependent data.
code
using System;
using System.Threading;
class Program
{
    static void Main()
    {
        Thread myThread = new Thread(ShowContextInfo);
        myThread.Start();
        myThread.Join();
        ShowContextInfo(); // the main thread's context
    }

    static void ShowContextInfo()
    {
        Console.WriteLine("Thread ID: " + Thread.CurrentThread.ManagedThreadId);
        Console.WriteLine("Culture:   " + Thread.CurrentThread.CurrentCulture.Name);
    }
}
In this example, a new thread (myThread) is created, and both the main thread and the new thread
access and display their respective execution context information.
Synchronization Contexts:
A synchronization context (SynchronizationContext) represents a target for scheduling work,
allowing code to marshal execution back to a particular context, such as a UI thread.
code
using System;
using System.Threading;
using System.Threading.Tasks;
class Program
{
    static void Main()
    {
        SynchronizationContext.SetSynchronizationContext(new SynchronizationContext());
        SynchronizationContext context = SynchronizationContext.Current;
        Task.Run(() =>
        {
            // Asynchronous operation
            context.Post(_ =>
            {
                Console.WriteLine("Posted back to the captured context.");
            }, null);
        });
        Console.ReadLine();
    }
}
AppDomains
An application domain (AppDomain) is an isolated environment inside a process in which .NET code
executes; a single process can host several AppDomains.
Key Concepts:
Isolation:
Each AppDomain has its own memory space and resource boundaries.
Code running in one AppDomain does not directly interfere with code in another AppDomain.
Security:
Code running in one AppDomain is separated from the code in another, preventing unauthorized
access.
Unloadability:
Unlike processes, AppDomains can be unloaded independently of the entire process, allowing for
dynamic loading and unloading of code.
Creating an AppDomain:
code
AppDomain newDomain = AppDomain.CreateDomain("NewDomain");
code
newDomain.DoCallBack(() =>
{
    Console.WriteLine("Executing in: " + AppDomain.CurrentDomain.FriendlyName);
});
code
AppDomain.Unload(newDomain);
Example:
code
using System;
class Program
{
    static void Main()
    {
        AppDomain newDomain = AppDomain.CreateDomain("NewDomain");
        newDomain.DoCallBack(() =>
        {
            Console.WriteLine("Running in: " + AppDomain.CurrentDomain.FriendlyName);
        });
        AppDomain.Unload(newDomain);
    }
}
In this example, a new AppDomain named "NewDomain" is created. Code is then executed within
this domain using the DoCallBack method. Finally, the AppDomain is unloaded, releasing its
resources.
Use Cases:
Dynamic Loading:
Loading and unloading assemblies dynamically during runtime without restarting the entire
application.
Versioning:
Running different versions of the same assembly side by side in separate AppDomains.
Domain Isolation:
Creating isolated environments for different components of a large application.
Considerations:
Cross-Domain Communication:
Communication between AppDomains typically involves using marshaling techniques, like remoting
or other inter-process communication mechanisms.
Performance Overhead:
Creating and unloading AppDomains incurs some performance overhead. It's crucial to consider
whether the benefits of isolation justify this cost.
Security Configuration:
Security policies can be configured for each AppDomain independently to control the permissions
granted to code running within.
AppDomains provide a powerful mechanism for managing the execution of code within a process.
Understanding how to create, use, and unload AppDomains is important for scenarios where
isolation, security, and dynamic code loading are required. Note that creating and unloading
custom AppDomains is a .NET Framework feature; .NET Core and later expose only a single default
AppDomain and use AssemblyLoadContext for dynamic loading.
Processes
A process is an independently executing program with its own memory space and resources.
Key Concepts:
Process Class:
The Process class in the System.Diagnostics namespace is the primary class for interacting with
processes in C#.
Starting a Process:
The Process.Start method is used to start a new process. It allows you to specify the executable file
path and other parameters.
code
using System.Diagnostics;
Process.Start("notepad.exe");
Process Information:
The Process class provides information about the started process, such as its ID, name, start time,
and more.
code
Process myProcess = Process.Start("notepad.exe");
Console.WriteLine("ID:   " + myProcess.Id);
Console.WriteLine("Name: " + myProcess.ProcessName);
Exiting a Process:
The Kill method forcibly terminates a running process.
code
myProcess.Kill();
Process Lifecycle:
The Process class provides the WaitForExit method to wait for a process to complete its execution.
code
myProcess.WaitForExit();
Inter-Process Communication:
Processes can communicate with each other using various mechanisms, such as named pipes, shared
memory, or message passing.
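As one sketch of such communication, the example below uses a named pipe between two threads in a single program; in practice the server and client would live in separate processes, and the pipe name "demo_pipe" is invented for the example:

```csharp
using System;
using System.IO;
using System.IO.Pipes;
using System.Threading;

class PipeDemo
{
    static void Main()
    {
        // Server side: waits for a connection, then sends one line.
        Thread server = new Thread(() =>
        {
            using (var pipe = new NamedPipeServerStream("demo_pipe"))
            using (var writer = new StreamWriter(pipe))
            {
                pipe.WaitForConnection();
                writer.WriteLine("Hello from the server");
                writer.Flush();
            }
        });
        server.Start();

        // Client side: connects to the pipe by name and reads the line.
        using (var pipe = new NamedPipeClientStream(".", "demo_pipe"))
        using (var reader = new StreamReader(pipe))
        {
            pipe.Connect();
            Console.WriteLine(reader.ReadLine());
        }
        server.Join();
    }
}
```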
Redirecting Input/Output:
The Process class provides StandardInput, StandardOutput, and StandardError properties, allowing
interaction with the standard input, output, and error streams of a process.
code
Process process = new Process();
process.StartInfo.FileName = "myProgram.exe";
process.StartInfo.UseShellExecute = false;
process.StartInfo.RedirectStandardOutput = true;
process.Start();
string output = process.StandardOutput.ReadToEnd();
process.WaitForExit();
code
using System;
using System.Diagnostics;
class Program
{
    static void Main()
    {
        Process process = new Process();
        process.StartInfo = new ProcessStartInfo
        {
            FileName = "ping",
            Arguments = "google.com",
            RedirectStandardOutput = true,
            UseShellExecute = false,
            CreateNoWindow = true
        };
        process.Start();
        string output = process.StandardOutput.ReadToEnd();
        Console.WriteLine(output);
        process.WaitForExit();
    }
}
In this example, a process is started using the ping command, and the standard output is redirected
to the console.
Understanding how to work with processes is essential for scenarios where launching external
applications, managing their lifecycle, and interacting with their input/output streams are required.
Locks
Lock Statement:
The lock statement is used to acquire the mutual-exclusion lock for a given object, ensuring that only
one thread can execute the protected code at a time. The syntax is as follows:
code
lock (lockObject)
{
    // Critical section
}
Example:
code
using System;
using System.Threading;
class Program
{
    static int sharedCounter = 0;
    static readonly object lockObject = new object();

    static void Main()
    {
        Thread t1 = new Thread(IncrementCounter);
        Thread t2 = new Thread(IncrementCounter);
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();
        Console.WriteLine("Final value: " + sharedCounter); // 10
    }

    static void IncrementCounter()
    {
        for (int i = 0; i < 5; i++)
        {
            lock (lockObject)
            {
                sharedCounter++;
            }
        }
    }
}
In this example, two threads (t1 and t2) increment a shared counter (sharedCounter). The lock
statement is used to ensure that only one thread can access the critical section (increment
operation) at a time, preventing data corruption.
Monitor Class:
The Monitor class provides more fine-grained control over synchronization, offering methods like
Enter and Exit to acquire and release locks. The Monitor class is used implicitly by the lock statement.
code
using System;
using System.Threading;
class Program
{
    static int sharedCounter = 0;
    static readonly object lockObject = new object();

    static void Main()
    {
        Thread t1 = new Thread(IncrementCounter);
        Thread t2 = new Thread(IncrementCounter);
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();
        Console.WriteLine("Final value: " + sharedCounter);
    }

    static void IncrementCounter()
    {
        Monitor.Enter(lockObject);
        try
        {
            sharedCounter++;
        }
        finally
        {
            Monitor.Exit(lockObject);
        }
    }
}
In this example, the Monitor class is used explicitly to demonstrate the equivalent functionality of
the lock statement.
Considerations:
Deadlocks:
Care must be taken to avoid potential deadlocks by ensuring that locks are acquired and released in a
consistent order.
Lock Granularity:
Locks should be applied to the smallest possible critical section to minimize contention and allow for
better parallelism.
Performance Impact:
Excessive use of locks can lead to reduced performance. Consider alternatives like lock-free data
structures or other synchronization mechanisms when applicable.
Lock Object:
The lock object should be chosen carefully. It can be any object, but it is often a dedicated
private object used only for synchronization.
Locks play a crucial role in ensuring thread safety and preventing race conditions in multithreaded
applications. Understanding how to use locks effectively is key to writing robust and concurrent C#
code.
Monitors
Basic Usage:
The Monitor class provides two main methods for working with monitors: Enter and Exit.
Monitor.Enter:
Acquires an exclusive lock on the specified object, allowing the current thread to enter the critical
section.
code
Monitor.Enter(lockObject);
try
{
    // Critical section
}
finally
{
    Monitor.Exit(lockObject);
}
Monitor.Exit:
Releases the lock on the specified object, allowing other threads to enter the critical section.
Example:
code
using System;
using System.Threading;
class Program
{
    static int sharedCounter = 0;
    static readonly object lockObject = new object();

    static void Main()
    {
        Thread t1 = new Thread(IncrementCounter);
        Thread t2 = new Thread(IncrementCounter);
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();
        Console.WriteLine("Final Counter Value: " + sharedCounter);
    }

    static void IncrementCounter()
    {
        Monitor.Enter(lockObject);
        try
        {
            sharedCounter++;
        }
        finally
        {
            Monitor.Exit(lockObject);
        }
    }
}
In this example, the Monitor class is used to protect the critical section where the sharedCounter is
incremented. The Monitor.Enter method acquires the lock, and the Monitor.Exit method releases it,
ensuring that only one thread at a time can execute the critical code.
Monitor.Wait, Monitor.Pulse, and Monitor.PulseAll:
The Wait method releases the lock and blocks the calling thread until another thread signals it.
It is typically used inside a loop that re-checks a condition:
code
lock (lockObject)
{
    while (condition)
    {
        Monitor.Wait(lockObject);
    }
    // Code after condition is false
}
The Pulse and PulseAll methods are used to signal waiting threads to continue.
Monitor.TryEnter:
Attempts to acquire the lock without blocking indefinitely; it returns true if the lock was
acquired.
code
if (Monitor.TryEnter(lockObject))
{
    try
    {
        // Critical section
    }
    finally
    {
        Monitor.Exit(lockObject);
    }
}
else
{
    // Could not acquire the lock
}
Considerations:
Exception Handling:
Using a try-finally block is essential to ensure that the lock is released even if an exception occurs.
Lock Object:
The lock object should be chosen carefully. It can be any object, but it is often a dedicated
private object used only for synchronization.
Deadlocks:
Care must be taken to avoid potential deadlocks by ensuring that locks are acquired and released in a
consistent order.
Timeouts:
Monitor.TryEnter allows for specifying a timeout, preventing indefinite waiting for a lock.
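The timeout overload mentioned above can be sketched as follows (the one-second value is arbitrary):

```csharp
using System;
using System.Threading;

class TryEnterDemo
{
    static readonly object lockObject = new object();

    static void Main()
    {
        // Give up if the lock cannot be acquired within one second.
        if (Monitor.TryEnter(lockObject, TimeSpan.FromSeconds(1)))
        {
            try
            {
                Console.WriteLine("Lock acquired.");
            }
            finally
            {
                Monitor.Exit(lockObject);
            }
        }
        else
        {
            Console.WriteLine("Timed out waiting for the lock.");
        }
    }
}
```

Since nothing else contends for the lock here, this prints "Lock acquired."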
Monitors are a powerful tool for managing synchronization in C# applications. They provide a simple
and effective way to protect critical sections of code and coordinate the execution of threads.
Understanding how to use monitors is crucial for writing robust and concurrent C# code.
ReaderWriterLock
ReaderWriterLock is a synchronization primitive that allows multiple threads to have concurrent read
access to a resource, while ensuring exclusive write access. This can be useful when multiple threads
need to read data simultaneously, but write operations must be performed exclusively to maintain
data consistency. Here's an overview of ReaderWriterLock:
Basic Usage:
code
using System;
using System.Threading;
class Program
{
    static ReaderWriterLock rwLock = new ReaderWriterLock();
    static int sharedData = 0;

    static void Main()
    {
        Thread t1 = new Thread(ReadData);
        Thread t2 = new Thread(WriteData);
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();
    }

    static void ReadData()
    {
        rwLock.AcquireReaderLock(Timeout.Infinite);
        try
        {
            Console.WriteLine("Read: " + sharedData);
        }
        finally
        {
            rwLock.ReleaseReaderLock();
        }
    }

    static void WriteData()
    {
        rwLock.AcquireWriterLock(Timeout.Infinite);
        try
        {
            sharedData++;
        }
        finally
        {
            rwLock.ReleaseWriterLock();
        }
    }
}
In this example, two threads (t1 and t2) demonstrate reading and writing shared data using
ReaderWriterLock. The AcquireReaderLock and ReleaseReaderLock methods are used for read access,
and AcquireWriterLock and ReleaseWriterLock for write access (both take a timeout;
Timeout.Infinite waits indefinitely).
ReaderWriterLockSlim:
ReaderWriterLockSlim is a lighter-weight replacement for ReaderWriterLock, recommended for new
development. It uses the EnterReadLock/ExitReadLock and EnterWriteLock/ExitWriteLock methods:
code
using System;
using System.Threading;
class Program
{
    static ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
    static int sharedData = 0;

    static void Main()
    {
        Thread t1 = new Thread(ReadData);
        Thread t2 = new Thread(WriteData);
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();
    }

    static void ReadData()
    {
        rwLock.EnterReadLock();
        try
        {
            Console.WriteLine("Read: " + sharedData);
        }
        finally
        {
            rwLock.ExitReadLock();
        }
    }

    static void WriteData()
    {
        rwLock.EnterWriteLock();
        try
        {
            sharedData++;
        }
        finally
        {
            rwLock.ExitWriteLock();
        }
    }
}
Considerations:
Upgradeable Read Locks:
ReaderWriterLockSlim provides upgradeable read locks, allowing a thread to acquire a read lock and
then potentially upgrade it to a write lock.
Lock Recursion:
ReaderWriterLockSlim supports lock recursion, allowing a thread to enter the same lock multiple
times.
Performance:
ReaderWriterLockSlim is generally faster than the older ReaderWriterLock and is the recommended
choice for new code.
Error Handling:
Always use try-finally blocks to ensure proper lock release, especially in the case of exceptions.
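The upgradeable read lock mentioned in the considerations can be sketched like this (the sharedData field and its "write only when zero" condition are invented for illustration):

```csharp
using System;
using System.Threading;

class UpgradeDemo
{
    static ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
    static int sharedData = 0;

    static void Main()
    {
        // Read first; upgrade to a write lock only if a change is needed.
        rwLock.EnterUpgradeableReadLock();
        try
        {
            if (sharedData == 0)
            {
                rwLock.EnterWriteLock(); // upgrade to exclusive access
                try
                {
                    sharedData = 42;
                }
                finally
                {
                    rwLock.ExitWriteLock();
                }
            }
            Console.WriteLine(sharedData); // 42
        }
        finally
        {
            rwLock.ExitUpgradeableReadLock();
        }
    }
}
```

Only one thread may hold the upgradeable lock at a time, which is what makes the read-then-write transition deadlock-free.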
ReaderWriterLock and ReaderWriterLockSlim are valuable tools when dealing with scenarios where
multiple threads need to access shared data concurrently, and a balance between read and write
access is necessary. The choice between the two depends on factors such as performance
considerations and the specific requirements of your application.
Mutex
Mutex (short for mutual exclusion) is a synchronization primitive used to control access to a shared
resource by multiple threads or processes. A Mutex allows only one thread or process to acquire the
mutex at a time, preventing concurrent access and ensuring data consistency. Here's an overview of
using Mutex in C#:
Basic Usage:
The Mutex class is part of the System.Threading namespace. It provides methods such as WaitOne to
acquire the mutex and ReleaseMutex to release it.
code
using System;
using System.Threading;
class Program
{
    static Mutex mutex = new Mutex();
    static int sharedData = 0;

    static void Main()
    {
        Thread t1 = new Thread(IncrementData);
        Thread t2 = new Thread(IncrementData);
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();
        Console.WriteLine("Final value: " + sharedData);
    }

    static void IncrementData()
    {
        mutex.WaitOne();
        try
        {
            // Critical section
            sharedData++;
        }
        finally
        {
            mutex.ReleaseMutex();
        }
    }
}
In this example, two threads (t1 and t2) increment a shared counter (sharedData) within a critical
section protected by a Mutex. The WaitOne method is used to acquire the mutex, and ReleaseMutex
is used to release it.
You can use the WaitOne method with a timeout to prevent indefinite waiting for a mutex. This is
useful in scenarios where waiting for a mutex for too long is not desirable.
code
bool mutexAcquired = mutex.WaitOne(TimeSpan.FromSeconds(2));
if (mutexAcquired)
{
    try
    {
        // Critical section
        sharedData++;
    }
    finally
    {
        mutex.ReleaseMutex();
    }
}
else
{
    Console.WriteLine("Could not acquire the mutex within the timeout.");
}
Named Mutex:
A named mutex can be used to synchronize threads or processes across multiple applications.
Named mutexes have a global scope and can be accessed by different processes.
code
// 'false' means the creating thread does not initially own the mutex;
// the name (here "Global\\MyNamedMutex" as an example) is visible across processes
Mutex namedMutex = new Mutex(false, "Global\\MyNamedMutex");
Considerations:
Exception Handling:
Always use try-finally blocks to ensure proper release of the mutex, even in case of exceptions.
Named Mutexes:
Named mutexes can be used for synchronization between threads in different processes.
Timeouts:
Use WaitOne with a timeout to avoid blocking indefinitely while waiting for a mutex.
Mutex Scope:
A mutex can be either local or named. Named mutexes have a global scope.
Mutex is a versatile synchronization primitive that provides a simple and effective way to control
access to shared resources. It is often used in scenarios where multiple threads or processes need to
coordinate access to a critical section of code or shared data.
Thread Pooling
Thread pooling in C# is a mechanism provided by the .NET Framework for managing and reusing a
pool of worker threads. Instead of creating a new thread for each task, a thread pool allows you to
use existing threads from the pool, leading to better performance and resource utilization. The
ThreadPool class in the System.Threading namespace provides functionalities for working with
thread pools.
Basic Concepts:
Thread Pooling:
A thread pool is a pool of worker threads that are managed by the runtime.
Threads in the pool are reused for multiple tasks, reducing the overhead of thread creation and
destruction.
ThreadPool Class:
The ThreadPool class provides static methods for queuing work items, controlling the number of
worker threads, and more.
Basic Usage:
code
using System;
using System.Threading;
class Program
{
    static void Main()
    {
        ThreadPool.QueueUserWorkItem(WorkerMethod);
        Console.WriteLine("Main thread continues to run...");
        Console.ReadLine();
    }

    static void WorkerMethod(object state)
    {
        Console.WriteLine("Work item running on a thread pool thread.");
    }
}
In this example, the QueueUserWorkItem method is used to queue a work item (the WorkerMethod)
to the thread pool. The thread pool manages the execution of this work item on an available worker
thread.
code
using System;
using System.Threading;
using System.Threading.Tasks;
class Program
{
    static void Main()
    {
        Task.Run(() => Console.WriteLine("Running on a thread pool thread."));
        Console.ReadLine();
    }
}
In this example, Task.Run is used to run a method on the thread pool. Under the hood, Task.Run
utilizes the thread pool for executing the specified delegate.
ThreadPool Configuration:
code
ThreadPool.SetMinThreads(5, 5);
ThreadPool.SetMaxThreads(10, 10);
The SetMinThreads and SetMaxThreads methods set the minimum and maximum number of worker threads
in the thread pool; the corresponding GetMinThreads and GetMaxThreads methods query the current
values.
code
SynchronizationContext.SetSynchronizationContext(new SynchronizationContext());
Task.Run(async () =>
{
    await Task.Delay(2000);
    // The continuation resumes according to the current synchronization context
});
The synchronization context of the thread pool can be set, allowing asynchronous callbacks to
execute within the context of the thread pool.
Considerations:
Long-Running Tasks:
Avoid blocking the thread pool threads to prevent performance degradation. Use asynchronous
patterns instead.
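For genuinely long-running work, one common pattern (a sketch using the standard Task API) is to request a dedicated thread so that pool workers stay free for short tasks:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class LongRunningDemo
{
    static void Main()
    {
        // TaskCreationOptions.LongRunning hints that the work should not
        // occupy a regular pool worker; the default scheduler typically
        // runs it on a dedicated thread instead.
        Task work = Task.Factory.StartNew(() =>
        {
            Console.WriteLine("On pool thread? " + Thread.CurrentThread.IsThreadPoolThread);
            Thread.Sleep(500); // simulate long-running work
        }, TaskCreationOptions.LongRunning);

        work.Wait();
    }
}
```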
The thread pool is a powerful mechanism for managing threads in C#, providing a balance between
simplicity and efficiency. It is well-suited for scenarios where many short-lived tasks need to be
executed concurrently.