
Sumeet K

Course 1: Nanobiotechnology
Module 4: Properties and Characterization of nanoparticles
(Tips: 1. Diagram wherever necessary / 2. For every Microscopy technique study necessary diagram, principle, working, applications mostly
related to biotechnology, advantages, limitations and sample preparation)

Q. Microscopy/ Explain what is microscopy


> Microscopy is an analytical technique of investigating small objects and structures using a microscope.
> In microscopy, a device called a microscope magnifies objects to allow study of their structure, morphology,
and other characteristics.
> There are various types of microscopy, including light, electron, scanning probe, fluorescence, and confocal
microscopy, etc. each with its own unique applications and benefits.
> Microscopy has a wide range of applications in biological research, medical diagnosis, materials science, quality
control, and forensic science, allowing for the study of cells, tissues, microorganisms, materials, and evidence.
> The key benefits of microscopy include high resolution, non-destructive testing, real-time imaging, and
quantitative analysis, enabling researchers and scientists to gain valuable insights and make accurate
measurements.

Q. Scanning electron microscopy (SEM)

> The scanning electron microscope is a powerful microscopic tool that utilizes electrons to form a magnified image of
a specimen under study.
> It is a powerful magnification tool that produces high-resolution, three-dimensional images which provide
information on the topography, morphology and composition of the sample/specimen.
> The information so obtained supports a large number of science and industry applications.
> The SEM was developed by Dr. Charles Oatley.

Principle :
> The principle of Scanning Electron Microscopy (SEM) revolves around the use of a focused beam of high-energy
electrons to scan the surface of a sample.
> When these electrons interact with the atoms in the sample, they produce various signals that can be detected and
analysed.
> These signals provide detailed information about the sample's surface topography, composition, and other
properties.
> By collecting and analysing the emitted secondary electrons, backscattered electrons, and characteristic X-rays, SEM
creates highly magnified images, allowing researchers to observe and study the fine details and structures of the
sample with high resolution and depth of field.
> SEM's capability to generate detailed and three-dimensional images makes it a valuable tool in fields such as material
science, biology, and nanotechnology.

Working :
> Electron guns placed at the top of the column produce high energy electrons. These are accelerated down and
allowed to pass through a combination of electromagnetic lenses. The lenses help to produce a focused electron beam.
> The electron beam moves across a vertical path through the microscope, in the presence of vacuum.
> The sample chamber area is also evacuated by a combination of pumps. The sample is placed inside this chamber.
> The scanning coils are adjusted to allow the electron beam to be focused on the sample surface. Scattering of the
beam enables information to be collected from the defined area on which the beam has been focused.
> The operator can adjust the beam through a computer to control magnification and surface area to be scanned.
> Interaction between the incident electrons and the sample surface leads to the release of a number of energetic
electrons from the sample surface.
> The interaction leads to specific scattering of electrons (e.g., backscattered electrons, secondary electrons etc.)
which can provide information on size, shape, texture and composition of the sample.
> The electrons are collected by detectors and converted into signals.
> The signals are sent to a screen to produce the final black-and-white, three-dimensional image.
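
As a rough sketch of this last signal-to-image step (illustrative only; the detector read-out function and values
below are hypothetical), the scanned signal can be mapped pixel-by-pixel into a grayscale image:

```python
import numpy as np

def read_detector(x: int, y: int) -> float:
    """Stand-in for the secondary-electron detector output at one dwell point
    (hypothetical; a real SEM reads this from the detector electronics)."""
    rng = np.random.default_rng(x * 1009 + y)
    return rng.random()

ny, nx = 256, 256                 # raster grid: one pixel per beam position
image = np.empty((ny, nx))
for y in range(ny):               # slow scan axis
    for x in range(nx):           # fast scan axis
        image[y, x] = read_detector(x, y)

# Normalize the collected signal to 8-bit grayscale for display
image8 = ((image - image.min()) / (image.max() - image.min()) * 255).astype(np.uint8)
print(image8.shape, image8.dtype)  # (256, 256) uint8
```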

SEM sample preparation:


> For biological specimens, dehydration is important. The removal of water from sample is needed because the water
would vaporize in the vacuum.
> The samples are gradually treated in the presence of increasing concentrations of acetone (10%, 30%, 50%, 70% and
100%) to dehydrate the sample.
> In case the sample to be observed under SEM is a non-metal, it has to be made conducting.
> This is achieved by covering the sample with a thin layer of conductive material such as gold.
> A device called a "sputter coater", which operates in the presence of an electric field and argon gas, is used to form
this gold coating.
> The argon gas and an electric field remove an electron from the argon atoms, making them positively charged. The
argon cations are then attracted to a negatively charged gold foil, where they knock gold atoms off the surface
of the foil.
> The gold atoms therefore settle onto the sample surface to produce a gold coating.

Advantages of SEM:
> Three-dimensional imaging and topographical, morphological and compositional information obtained
> User-friendly, fast and easy to operate
> It provides higher resolution as well as larger area to be focused at one time as compared to traditional microscopes.
> Increased control over the degree of magnification obtained in SEM.

Disadvantages of SEM:
> Expensive, large in size, requires proper housing and maintenance.
> Operators required to be specially trained for operation, data analysis and interpretation.
> Samples must be stable under vacuum and able to withstand the vacuum pressure.
> There is a mild risk of exposure to radiation, with electrons that have scattered from beneath the sample surface.
> Non-conducting sample to be observed under conventional SEM need to be specifically coated with electrically
conducting material.
> It does not provide information on living samples. It is only applicable to fixed, non-living samples.

Applications of SEM:
> Scanning Electron Microscopy (SEM) is widely utilized across multiple fields due to its ability to provide
high-resolution images and detailed information about sample surfaces.
> In material science, SEM helps analyze the microstructure and composition of various materials, aiding in
the development of new alloys, polymers, and composites.
> In biology, it allows researchers to study the surface morphology of cells, tissues, and microorganisms in
great detail.
> SEM is also crucial in nanotechnology, where it is used to investigate and characterize nanostructures and
their properties.
> Forensic science benefits from SEM by enabling detailed examination of trace evidence, such as fibers,
residues, and gunshot residues, thereby supporting criminal investigations.
> Additionally, SEM is employed in the semiconductor industry for inspecting and analyzing the microscopic
features of electronic components and circuits, ensuring quality control and advancing microfabrication
techniques. Overall, SEM's versatility and precision make it indispensable in research, industry, and
technology development.

Q. Transmission electron microscopy (TEM)

> Transmission electron microscopy (TEM) is an analytical technique used to visualize the smallest
structures in matter.
> Unlike optical microscopes, which rely on light in the visible spectrum, TEM can reveal stunning detail at
the atomic scale by magnifying nanometre structures up to 50 million times.
> This is because electrons can have a significantly shorter wavelength (about 100,000 times smaller) than
that of visible light when accelerated through a strong electromagnetic field, thus increasing the microscope
resolution by several orders of magnitude.
> TEM was invented by German scientists Ernst Ruska and Max Knoll in 1931.
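
The wavelength claim above can be checked with the relativistic de Broglie relation; a minimal sketch
(standard physical constants; 200 kV is chosen here as an illustrative accelerating voltage):

```python
import numpy as np

# Relativistic de Broglie wavelength of an electron accelerated through voltage V:
# lambda = h / sqrt(2*m*e*V * (1 + e*V / (2*m*c^2)))
h = 6.626e-34   # Planck constant, J*s
m = 9.109e-31   # electron rest mass, kg
e = 1.602e-19   # elementary charge, C
c = 2.998e8     # speed of light, m/s

def electron_wavelength_pm(V: float) -> float:
    lam = h / np.sqrt(2 * m * e * V * (1 + e * V / (2 * m * c**2)))
    return lam * 1e12  # picometres

print(electron_wavelength_pm(200e3))  # ~2.5 pm at 200 kV
# Green light is ~550 nm, so the electron wavelength here is >100,000x shorter.
```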

Principle:
> Transmission Electron Microscopy (TEM) operates on the principle of transmitting a beam of electrons
through an ultra-thin specimen to form an image.
> As the electron beam passes through the specimen, it interacts with the sample's atoms, causing
scattering and diffraction.
> The transmitted and scattered electrons are then collected by a series of electromagnetic lenses, which
focus them to create a detailed and high-resolution image on a fluorescent screen or digital detector.
> This process allows TEM to reveal the internal structure and composition of the sample at an atomic or
molecular level, making it an invaluable tool for studying the fine details of cells, tissues, materials, and
nanostructures.

Working:
> Transmission Electron Microscopy (TEM) works by transmitting a beam of high-energy electrons through
an extremely thin specimen, typically less than 100 nanometers thick.
> As the electron beam travels through the sample, electrons interact with the atoms within, resulting in
scattering and diffraction.
> The scattered electrons carry information about the internal structure of the sample and are captured by
an array of electromagnetic lenses that focus them to create a highly magnified image.
> This image is formed on a fluorescent screen or detected digitally, revealing fine structural details at
atomic or molecular levels.

TEM Sample Preparation:


> The process typically begins with fixation, where the sample is stabilized using chemicals such as
glutaraldehyde or osmium tetroxide to preserve its structure.
> Next, the sample is dehydrated using solvents like ethanol or acetone, and then embedded in a resin,
such as epoxy or acrylic, to provide support and stability.
> The embedded sample is then sectioned into thin slices, typically around 50-100 nanometers, using an
ultramicrotome.
> Finally, the sections are placed on a TEM grid, stained with heavy metals like uranium or lead to enhance
contrast, and coated with a thin layer of carbon to prevent charging under the electron beam.

Advantages of TEM:
> Offers the highest and most powerful magnification of any microscopy technique.
> Versatile imaging modes: dark/bright field and phase contrast (TEM); high-angle annular dark field (STEM).
> Provides the ability to collect electron diffraction patterns (crystallographic information) from nanometer-
sized regions by using selected-area diffraction (SAD).
> Enables nano-analysis: the ability to collect local information about composition and bonding which can be
correlated to high resolution images.

Disadvantages of TEM:
> Limited sampling: a typical field of view for a HR-TEM image is no more than 100 nm².
> Difficult sample preparation to create extremely thin specimens; thin samples often result in imaging artifacts.
> Vacuum environment required.

Applications:
> Transmission Electron Microscopy (TEM) is a vital tool in nanobiotechnology, enabling researchers to
explore and manipulate biological structures at the nanoscale.
> By providing high-resolution images of biological specimens, such as proteins, viruses, and cellular
organelles, TEM allows scientists to observe the intricate details of these nanoscale structures.
> This detailed imaging is essential for understanding the molecular architecture and interactions within
cells, aiding in the development of nanomedicines and targeted drug delivery systems.
> TEM also plays a critical role in characterizing nanoparticles and nanomaterials used in biomedical
applications, ensuring their efficacy and safety.
> Overall, TEM's ability to reveal the fine details of biological and nanostructured materials significantly
advances the field of nanobiotechnology, facilitating breakthroughs in medical research and innovative
therapies.

Q. Scanning Tunnelling Microscopy

> Scanning Tunneling Microscopy, or STM, is an imaging technique used to obtain ultra-high resolution
images at the atomic scale, without using light or electron beams.
> STM was invented in 1981 by two IBM scientists named Gerd Binnig and Heinrich Rohrer.

Principle:
> The principle of Scanning Tunneling Microscopy (STM) is based on the phenomenon of quantum
tunneling, where electrons can pass through a potential energy barrier, allowing for the detection of tiny
changes in surface topography.
> In STM, a sharp probe tip is brought extremely close to the surface of a sample, typically within 1-10
angstroms, creating a tunneling current between the tip and the sample.
> As the probe tip is scanned across the surface, the tunneling current varies in response to changes in the
surface topography, allowing for the creation of high-resolution images of the surface with atomic-scale
resolution.
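
A minimal sketch of why the tunneling current is so sensitive to the gap, using the one-dimensional relation
I ∝ V·exp(−2κd) (the ~4 eV barrier height below is an assumed, typical value for a metal surface):

```python
import numpy as np

# Tunneling current decays as I ~ V * exp(-2*kappa*d), with
# kappa = sqrt(2*m*phi) / hbar for barrier height phi.
hbar = 1.0546e-34   # J*s
m_e = 9.109e-31     # electron mass, kg
eV = 1.602e-19      # J per electronvolt

phi = 4.0 * eV                            # assumed barrier height (typical metal)
kappa = np.sqrt(2 * m_e * phi) / hbar     # decay constant, 1/m

for gap_angstrom in (4.0, 5.0):
    factor = np.exp(-2 * kappa * gap_angstrom * 1e-10)
    print(gap_angstrom, factor)
# The current drops by roughly an order of magnitude for each extra angstrom
# of gap, which is what gives STM its atomic-scale height sensitivity.
```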

Working:
> In Scanning Tunneling Microscopy (STM), a sharp probe tip is brought into close proximity with the
surface of a sample, and a voltage bias is applied between the tip and the sample.
> As the tip is scanned across the surface, the tunneling current that flows between the tip and the sample
is measured, and this current is used to create a topographic image of the surface.
> The STM's control system uses a feedback loop to maintain a constant tunneling current, adjusting the
height of the tip above the surface as needed.
> This allows the STM to produce high-resolution images of the surface, with atomic-scale resolution, and
to detect subtle changes in surface topography.
> The resulting image is a map of the surface topography, with bright areas indicating higher regions and
dark areas indicating lower regions.

Sample Preparation:
> The sample is first cleaned to remove any contaminants or debris, often using solvents such as acetone
or ethanol.
> Next, the sample may undergo additional surface preparation techniques, such as sputtering or etching,
to remove any oxide layers or other impurities.
> To ensure conductivity, the sample may be coated with a thin layer of conductive material, such as gold
or platinum.
> Finally, the sample is mounted on a sample holder and inserted into the STM chamber, where it is
evacuated to a high vacuum to prevent contamination and ensure stable imaging conditions.
> In some cases, samples may also undergo in-situ preparation, such as cleaving or annealing, within the
STM chamber to produce a fresh, clean surface.

Advantages of STM:
> High-resolution imaging with atomic-scale resolution, allowing researchers to visualize individual atoms
and molecules on surfaces.
> STM also provides real-time imaging capabilities, enabling the study of dynamic surface processes, such
as chemical reactions and surface diffusion.
> Additionally, STM is a non-destructive technique, allowing samples to be imaged multiple times without
damage.
> The technique also requires minimal sample preparation, and can be used to study a wide range of
materials, including metals, semiconductors, and biomolecules.
> Furthermore, STM can operate in various environments, including ultrahigh vacuum, air, and liquids,
making it a versatile tool for research in fields such as materials science, chemistry, and biology.

Disadvantages of STM:
> Requirement for a conductive sample surface, which means non-conductive materials cannot be imaged directly.
> STM is sensitive to vibrations, temperature fluctuations, and other environmental factors, which can affect
image quality and stability.
> STM has a limited scan size, typically ranging from a few nanometers to a few micrometers, which can
make it difficult to study large-scale surface features.
> STM can be a slow technique, with scan times ranging from minutes to hours, depending on the
resolution and scan size.

Applications of STM:
> Scanning Tunneling Microscopy (STM) has numerous applications, including the imaging and
manipulation of biomolecules, such as DNA, proteins, and cells, at the nanoscale.
> STM is used to study the structure and dynamics of biomolecules, including their conformation,
orientation, and interactions with surfaces.
> STM is employed in the development of nanoscale biosensors, which can detect specific biomolecules or
biochemical reactions.
> STM is used to study the interactions between biomolecules and nanomaterials, such as nanoparticles
and nanotubes, which is crucial for the development of nanomedicines and nanotherapeutics.
> STM is applied in the field of nanotoxicology, where it is used to study the interactions between
nanomaterials and cells, and to assess the potential toxicity of nanomaterials.

Q. Atomic Force Microscopy

> Atomic force microscopy (AFM) was invented by Gerd Binnig, Calvin Quate, and Christoph Gerber in 1986.


> Atomic force microscopy is a scanning probe microscopy technique that uses a sharp tip to measure and image
materials at the atomic and nano scales.
> AFM is used to map and study the three-dimensional surface topography at micro/nano-meter scale for various
types of materials including metals, semiconductors, soft biological samples, and conductive and non-conductive
materials, in different environments of air, liquid, and vacuum.
> AFM can generate images at atomic resolution with height resolution at angstrom-scale precision.

Principle:
> The principle of Atomic Force Microscopy (AFM) is based on the interaction between a sharp probe tip and the
surface of a sample.
> The probe tip is attached to a flexible cantilever, which is scanned across the sample surface.
> As the tip interacts with the sample, it experiences a force, such as van der Waals, electrostatic, or magnetic forces,
which causes the cantilever to deflect.
> This deflection is measured by a laser beam that is reflected off the back of the cantilever, allowing for the creation
of a high-resolution topographic image of the sample surface.
> By controlling the force between the tip and the sample, AFM can operate in various modes, including contact mode,
tapping mode, and non-contact mode, enabling the imaging of a wide range of samples, from soft biological tissues to
hard materials.
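
In contact mode, the measured deflection maps to a tip-sample force through Hooke's law, F = k·x; a minimal
sketch with assumed, illustrative values:

```python
# Hooke's law for the cantilever: force = spring constant * deflection.
k_N_per_m = 0.1        # assumed spring constant of a soft contact-mode cantilever
deflection_nm = 2.0    # deflection inferred from the photodetector signal

# N/m multiplied by nm gives nN directly
force_nN = k_N_per_m * deflection_nm
print(f"tip-sample force ~ {force_nN:.2f} nN")  # ~0.20 nN
```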

Working:
> In Atomic Force Microscopy (AFM), a sharp probe tip, typically made of silicon or silicon nitride, is attached to a
flexible cantilever. The cantilever is mounted on a piezoelectric scanner, which allows for precise movement in the x,
y, and z directions.
> The sample to be imaged is placed on a stage, which is also mounted on the scanner.
> As the scanner moves the cantilever and sample relative to each other, the probe tip interacts with the sample
surface, causing the cantilever to deflect.
> This deflection is measured using a laser beam, which is reflected off the back of the cantilever onto a photodetector.
> The photodetector converts the changes in the reflected laser beam into an electrical signal, which is then processed
by the AFM's control system.
> The control system uses this signal to generate a high-resolution topographic image of the sample surface, with
resolutions down to the atomic scale.

Sample Preparation:
> Sample preparation typically involves cleaning and flattening the sample surface to ensure optimal imaging
conditions.
> For rigid samples, such as metals, ceramics, and semiconductors, cleaning with solvents like acetone, ethanol, or
water may be sufficient. For softer samples, like polymers or biological tissues, more gentle cleaning methods, such as
blowing with nitrogen or using a soft brush, may be necessary.
> Additionally, samples may need to be dried or frozen to prevent degradation or contamination.
> For samples with rough or irregular surfaces, techniques like polishing, etching, or coating with a thin layer of
material may be employed to achieve a smoother surface.
> Overall, the specific sample preparation steps will depend on the sample's properties and the desired imaging
conditions.

Advantages:
> Atomic Force Microscopy (AFM) offers high-resolution imaging with nanoscale resolution, allowing researchers to
visualize surface topography, composition, and properties at the atomic level.
> AFM is also a non-destructive technique, enabling the imaging of fragile or sensitive samples without damage.
> AFM can operate in various environments, including air, liquids, and vacuum, making it a versatile tool for studying
a wide range of samples, from biological tissues to materials.
> AFM is relatively easy to use and maintain, with minimal sample preparation required, making it a popular choice
for researchers across various disciplines, including materials science, biology, and nanotechnology.

Disadvantages:
> Requirement for a relatively flat and smooth sample surface, as rough or irregular surfaces can be difficult to image
accurately.
> AFM is a slow technique, with scan times ranging from minutes to hours, depending on the resolution and scan size.
> The technique is also sensitive to vibrations, temperature fluctuations, and other environmental factors, which can
affect image quality and stability.
> AFM tips can be prone to wear and tear, requiring frequent replacement, and the technique can be limited in its
ability to image large areas or thick samples.
> Interpreting AFM images can require expertise and specialized knowledge, which can be a limitation for some
researchers.

Applications:
> Atomic Force Microscopy (AFM) has a wide range of applications, including the imaging and characterization of
biomolecules, such as DNA, proteins, and cells, at the nanoscale.
> AFM is used to study the structure and dynamics of biomolecules, including their conformation, interactions, and
binding mechanisms.
> AFM is employed in the development of nanoscale biosensors, which can detect specific biomolecules or biochemical
reactions.
> AFM is also used to study the interactions between biomolecules and nanomaterials, such as nanoparticles and
nanotubes, which is crucial for the development of nanomedicines and nanotherapeutics.

Q. UV – Vis Microscopy

> Ultraviolet (UV)-visible spectroscopy is a technique that measures how much ultraviolet and visible light a
substance absorbs.
> UV-Visible spectroscopy deals with the study of the electronic transitions of molecules as they absorb light in the
UV (190-400 nm) and visible regions (400-800 nm) of the electromagnetic spectrum.
> The absorption of ultraviolet or visible radiation leads to transitions among electronic energy levels; hence it is
also often called electronic spectroscopy.

Principle:
> The principle of UV-Vis microscopy is based on the interaction between ultraviolet (UV) and visible (Vis) light and the
sample being analysed.
> In UV-Vis microscopy, a beam of UV-Vis light is passed through the sample, and the transmitted or reflected light is
measured.
> The sample absorbs certain wavelengths of light and transmits or reflects others, resulting in a characteristic
absorption or reflection spectrum.
> The absorption or reflection spectrum is then used to identify the sample's chemical composition, structure, and
concentration.
> The UV-Vis microscope uses a combination of optics, spectroscopy, and imaging techniques to visualize the sample's
absorption or reflection properties at different wavelengths, allowing for the analysis of samples at the microscale.

Working:
> In UV-Vis microscopy, the interaction between ultraviolet (UV) and visible (Vis) light and the sample is analysed.
> The process begins with the illumination of the sample by a broad-spectrum UV-Vis light source, typically a xenon or
mercury arc lamp.
> The light is then focused onto the sample using a combination of lenses and mirrors, which helps to optimize the
illumination and reduce glare.
> As the light interacts with the sample, certain wavelengths are absorbed by the sample's molecules, while others are
transmitted or reflected.
> The transmitted or reflected light is then collected by a detector, typically a photomultiplier tube (PMT) or a charge-
coupled device (CCD) camera.
> The detector measures the intensity of the transmitted or reflected light as a function of wavelength, generating a
UV-Vis spectrum that is characteristic of the sample's molecular structure and composition.
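
For quantification, the measured intensities are commonly converted to concentration via the Beer-Lambert law,
A = log10(I0/I) = ε·l·c; a minimal sketch (the analyte and its molar absorptivity below are assumed for
illustration):

```python
import numpy as np

epsilon = 6220.0   # molar absorptivity of NADH at 340 nm, L/(mol*cm) (literature value)
path_cm = 1.0      # standard 1 cm cuvette

def concentration_mol_per_L(I0: float, I: float) -> float:
    """Beer-Lambert: A = log10(I0/I) = epsilon * l * c, solved for c."""
    A = np.log10(I0 / I)
    return A / (epsilon * path_cm)

# 50% transmittance -> A ~ 0.301 -> c ~ 4.8e-5 mol/L
print(concentration_mol_per_L(100.0, 50.0))
```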

Sample Preparation:
> The sample must be properly cleaned and dried to prevent contamination and interference.
> For solid samples, this may involve polishing or grinding to create a smooth surface.
> For liquid samples, a solvent such as water or ethanol may be used to dissolve the sample, and the solution may
need to be filtered to remove impurities.
> The sample may also need to be diluted to an appropriate concentration, typically in the range of 0.1-10 mg/mL.
> The sample may require stabilization or fixation to prevent degradation or changes in its chemical structure during
analysis.
> Finally, the sample is typically placed in a specialized cell or cuvette, which is designed to hold the sample in place
and allow for optimal transmission of light.
> The cell or cuvette is then placed in the UV-Vis microscope, and the analysis can begin.

Advantages:
> High sensitivity and specificity, allowing for the detection and identification of small amounts of substances.
> Non-destructive and non-invasive method for analysing samples, making it ideal for studying delicate or rare
materials.
> UV-Vis microscopy is a relatively fast and simple technique, requiring minimal sample preparation and allowing for
rapid data acquisition.
> Provides a wide range of information, including molecular structure, chemical bonding, and electronic transitions,
making it a powerful tool for understanding the chemical and physical properties of materials.

Disadvantages:
> Limited spatial resolution.
> The technique is sensitive to sample preparation and instrumental conditions, which can affect the accuracy and
reliability of the results.
> Limited to analysing samples that absorb or transmit light in the UV-Vis range, which can exclude certain types of
samples, such as opaque or highly scattering materials.
> Susceptibility to interference from impurities, solvents, or other substances that can absorb or scatter light, which
can complicate data interpretation.
> Requires specialized and expensive instrumentation.
> Analysis can be time-consuming and labour-intensive, especially for complex samples.

Applications:
> Characterization of nanoparticles, such as quantum dots, nanorods, and nanocrystals, which are used in biomedical
imaging, diagnostics, and therapy.
> Used to study the interactions between nanoparticles and biomolecules, such as proteins, DNA, and cells, which is
crucial for understanding the toxicity and efficacy of nanomaterials in biological systems.
> UV-Vis microscopy is applied in the development of nanoscale biosensors, which can detect specific biomolecules or
biochemical reactions, and in the study of cellular behaviour, such as cell signalling, migration, and differentiation,
which is important for understanding cellular processes and developing new therapeutic strategies.
> Used to analyse the optical properties of nanostructured materials, such as nanowires, nanotubes, and nanoarrays.

Q. Dynamic Light Scattering

> Dynamic light scattering (DLS), also known as photon correlation spectroscopy or quasielastic light scattering, is a
technique used to measure the flow and diffusive properties of scattering particles in suspension or polymers in
solution.
> DLS is a technique used for measuring the size of particles/biomolecules typically in the sub-micron region.
> Usually DLS is concerned with measurement of size of particles suspended in a liquid medium.

Principle:
> DLS measures the speed at which particles diffuse due to Brownian motion in the suspension, which is related to
the size of the particles.
> Brownian motion is the random thermal motion of particles caused by bombardment by the solvent molecules
surrounding them.
> The larger the particle, the slower its Brownian motion; smaller particles are knocked further by the solvent
molecules and move faster than larger ones.
> The fluctuation of the scattered light intensity thus reflects the diffusion rate of the particles.
> DLS measures alteration in scattered light intensity with time at a fixed scattering angle (typically 90º), whereas static
light scattering measures scattered light intensity as a function of angle.
> The velocity of the Brownian motion is characterized by the translational diffusion coefficient (usually given the
symbol D). The particle size can then be estimated using the Stokes-Einstein equation.
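> In its standard form (assuming spherical particles), the Stokes-Einstein equation reads d_H = k_B·T / (3·π·η·D),
where d_H is the hydrodynamic diameter, k_B the Boltzmann constant, T the absolute temperature, η the solvent
viscosity, and D the translational diffusion coefficient.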

> With the help of the Stokes-Einstein equation, DLS can thus be used for the determination of the hydrodynamic
diameter and ellipticity of the particles in the suspension.

Working:
> DLS involves illuminating the sample with a laser beam, which causes the particles to scatter the light in different
directions.
> The scattered light is then detected by a photodetector, which measures the intensity of the scattered light as a
function of time.
> The intensity of the scattered light fluctuates due to the Brownian motion of the particles, which causes them to
move randomly and scatter the light in different directions.
> By analysing these fluctuations, DLS can determine the size and size distribution of the particles, as well as their
polydispersity index (PDI), which is a measure of the breadth of the size distribution.
> The technique is commonly used to characterize nanoparticles, proteins, and other biomolecules, and is particularly
useful for measuring particle sizes in the range of 1-10,000 nanometers.
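
A minimal sketch of the final size calculation (water at 25 °C assumed; the diffusion coefficient is an
illustrative value that, in practice, comes from fitting the autocorrelation of the intensity fluctuations):

```python
import numpy as np

kB = 1.380649e-23   # Boltzmann constant, J/K
T = 298.15          # temperature, K (25 C)
eta = 0.89e-3       # viscosity of water at 25 C, Pa*s

def hydrodynamic_diameter_nm(D: float) -> float:
    """Stokes-Einstein: d_H = kB*T / (3*pi*eta*D), with D in m^2/s."""
    return kB * T / (3 * np.pi * eta * D) * 1e9

print(hydrodynamic_diameter_nm(4.3e-12))  # ~114 nm for D = 4.3e-12 m^2/s
```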

Sample Preparation:
> The sample must be dissolved or dispersed in a suitable solvent, such as water or buffer, to create a homogeneous
solution.
> Concentrations between 0.1-10 mg/mL are suitable.
> The sample should also be filtered to remove any dust, aggregates, or other contaminants that can interfere with
the measurement.
> A 0.2-0.5 μm filter is commonly used.
> Some samples, such as proteins or nanoparticles, may require additional preparation steps, such as centrifugation
or sonication, to ensure proper dispersion and stability.
> The sample should be loaded into a suitable cuvette or cell, typically made of glass or quartz, and placed in the DLS
instrument for measurement.

Advantages:
> High sensitivity and accuracy in measuring particle sizes and size distributions.
> Non-invasive and non-destructive technique.
> Relatively fast technique, with measurement times ranging from seconds to minutes, making it ideal for high-
throughput analysis and quality control applications.
> Can measure particle sizes over a wide range, from 1-10,000 nanometers.
> Versatile and can be applied to a wide range of samples, including nanoparticles, proteins, polymers, and colloids, in
various solvents and under different conditions.

Disadvantages:
> Sensitivity to sample polydispersity, which can lead to inaccurate results if the sample contains a wide range of
particle sizes.
> DLS is susceptible to interference from dust, aggregates, and other contaminants, which can scatter light and affect
the measurement.
> DLS requires careful control of experimental conditions, such as temperature, concentration, and solvent properties,
which can be time-consuming and challenging.
> DLS can be prone to errors due to multiple scattering, which can occur when the sample is too concentrated or the
particles are too large, leading to inaccurate results.

Applications:
> Dynamic Light Scattering (DLS) has a wide range of applications, including the characterization of nanoparticles, such
as liposomes, micelles, and quantum dots, which are used in drug delivery, imaging, and diagnostics.
> DLS is used to measure the size, size distribution, and zeta potential of nanoparticles, which are critical parameters
in determining their stability, toxicity, and efficacy.
> DLS is applied in the study of protein-nanoparticle interactions, which is essential for understanding the behaviour
of nanoparticles in biological systems.
> DLS is applied in the characterization of biomolecules, such as proteins and nucleic acids, which is essential for
understanding their structure and function.

Q. X-Ray Diffraction

> X-ray diffraction (XRD) is a non-destructive analytical technique that uses X-rays to determine the structural
properties of materials.
> It helps to identify a contaminant or corrosion product.
> It also helps in identification of foreign phases for purity analysis of crystalline powders.

Principle:
> X-ray diffraction analysis is based on the constructive interference of monochromatic X-rays scattered by the
crystalline sample.
> A cathode ray tube generates the X-rays, which are filtered to produce monochromatic radiation, collimated to
concentrate the beam, and then directed toward the sample.
> When Bragg's condition is satisfied, the interaction of the incident rays with the sample produces
constructive interference.
> Bragg's law (nλ = 2d·sin θ) relates the wavelength of the electromagnetic radiation to the lattice spacing in the
crystalline sample and to the diffraction angle.
> The unique X-ray diffraction pattern generated in the XRD analysis provides a unique fingerprint of the
crystals.
> By comparison with standard reference patterns and proper interpretation, this fingerprint allows the
identification of the crystalline form.
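
A minimal sketch of the d-spacing calculation from a measured peak position (Cu Kα radiation assumed; the peak
position is an illustrative value):

```python
import numpy as np

wavelength_A = 1.5406   # Cu K-alpha wavelength in angstroms (common lab source)

def d_spacing_A(two_theta_deg: float, order: int = 1) -> float:
    """Bragg's law n*lambda = 2*d*sin(theta), solved for d (angstroms)."""
    theta = np.radians(two_theta_deg / 2.0)
    return order * wavelength_A / (2.0 * np.sin(theta))

print(f"d = {d_spacing_A(38.2):.3f} angstrom")  # peak at 2-theta = 38.2 deg -> d ~ 2.35 A
```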

Working:
> The working of XRD involves directing a beam of X-rays at a sample; the atoms in the sample scatter the X-rays in
different directions.
> The scattered X-rays interfere with one another, producing a diffraction pattern that is
characteristic of the sample's crystal structure.
> The diffraction pattern is then detected by a detector, which measures the intensity of the scattered X-rays
as a function of the scattering angle.
> By analysing the diffraction pattern, XRD can provide information on the sample's crystal structure,
including the arrangement of atoms, the size and shape of the unit cell, and the presence of defects or
impurities.

Sample Preparation:
> Powder used for X-ray diffraction analysis should be finely ground and homogeneous (the analysis itself is
non-destructive).
> A minimum of 0.25 g of sample can be used for the analysis.
> For routine analysis, 1 g is sufficient.
> In the case of a well-crystalline material, as little as 1 mg of sample is enough.

Advantages:
> Ability to provide detailed information about the crystal structure, composition, and defects of materials.
> XRD is a non-destructive technique, allowing for the analysis of samples without altering their structure or
composition.
> Highly sensitive and can detect subtle changes in the material's crystal structure.
> XRD can also analyze a wide range of materials, including metals, ceramics, polymers, and biomaterials.

Disadvantages:
> Requirement for crystalline samples, as amorphous materials do not produce a diffraction pattern.
> XRD is sensitive to sample preparation, and improper preparation can lead to inaccurate results.
> May be expensive and inaccessible technique for some researchers.

Applications:
> Characterization of nanostructured materials, such as nanoparticles, nanocrystals, and nanoscale thin films,
which are used in biomedical applications, such as drug delivery, imaging, and diagnostics.
> Applied in the study of protein-nanoparticle interactions, which is essential for understanding the
biocompatibility and toxicity of nanoparticles.
> Applied in the development of new nanomaterials and nanostructures for biomedical applications, such
as nanocarriers for targeted drug delivery and nanosensors for disease diagnosis.

Q. FTIR

> Fourier Transform Infrared Spectroscopy (FTIR) is a non-destructive analytical technique that uses
infrared radiation to identify and quantify the molecular composition of a sample.
> It provides a spectrum of the absorption or emission of a solid, liquid, or gas.
> In FTIR, data are collected simultaneously over a wide range of wavelengths. This feature differs from a
UV-Visible spectrophotometer, where data are collected over a narrow range of wavelengths at a time.

Principle:
> When infrared radiation is absorbed by a sample, it causes the molecular bonds to vibrate.
> Each type of molecular bond absorbs infrared radiation at a specific wavenumber, which is a measure of
the energy of the radiation.
> The absorbed radiation is then converted into a spectrum, which is a plot of the amount of radiation
absorbed versus the wavenumber.
> The spectrum is a unique fingerprint of the sample's molecular composition, allowing for the identification
and quantification of specific molecular bonds and functional groups.

Working:
> A beam of infrared radiation is generated by a source, typically a globar or a ceramic element, and directed
onto a beam splitter, which divides the beam into two paths.
> One path leads to a fixed mirror, while the other path leads to a moving mirror that oscillates at a precise
frequency.
> The two beams are then reflected back to the beam splitter, where they recombine and create an
interference pattern.
> The resulting beam is then directed onto the sample, which absorbs some of the radiation and transmits
the rest.
> The transmitted radiation is then detected by a detector, typically a thermocouple or a mercury cadmium
telluride (MCT) detector, which converts the radiation into an electrical signal.
> The signal is then sent to a computer, which uses a mathematical algorithm called the Fourier transform to
convert the signal into a spectrum.
> The Fourier transform algorithm takes the raw data and converts it into a plot of the amount of radiation
absorbed versus the wavenumber, which is a measure of the energy of the radiation.
> The resulting spectrum is a unique fingerprint of the sample's molecular composition, allowing for the
identification and quantification of specific molecular bonds and functional groups.
> Finally, the spectrum is analysed using specialized software, which compares the spectrum to a library of
known spectra to identify the sample's molecular structure.
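
A minimal sketch of the interferogram-to-spectrum step described above (a synthetic two-line source is assumed
purely for illustration):

```python
import numpy as np

# Simulate an interferogram: each spectral line at wavenumber v (cm^-1)
# contributes cos(2*pi*v*x) versus optical path difference x (cm).
n = 4096
dx = 1.0e-4                      # OPD step in cm -> max wavenumber 1/(2*dx) = 5000 cm^-1
x = np.arange(n) * dx
interferogram = np.cos(2 * np.pi * 1000 * x) + 0.5 * np.cos(2 * np.pi * 1600 * x)

# Fourier transform: interferogram (vs OPD) -> spectrum (vs wavenumber)
spectrum = np.abs(np.fft.rfft(interferogram))
wavenumbers = np.fft.rfftfreq(n, d=dx)   # cm^-1

print(wavenumbers[np.argmax(spectrum)])  # ~1000 cm^-1, the stronger line
```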

Sample Preparation:
> For solid samples, a small amount of the sample is typically mixed with a suitable matrix, such as potassium
bromide (KBr), and compressed into a pellet using a hydraulic press.
> Liquid samples are placed between two salt plates, such as sodium chloride (NaCl), while gas samples are
introduced into a gas cell with infrared-transparent windows.
> Biological samples, such as tissues or cells, may require freezing, drying, or embedding in a suitable matrix.
> Regardless of the preparation method, the sample must be homogeneous, free of contaminants, and
properly aligned in the sample compartment.

Advantages:
> Provides rapid and accurate identification of molecular structures, functional groups, and chemical bonds.
> FTIR is a non-destructive technique, allowing for the analysis of samples without altering their composition
or structure.
> The technique is also highly sensitive and can detect small changes in the molecular structure of a sample.
> The technique is also relatively easy to use and requires minimal sample preparation.
> Can provide both qualitative and quantitative information, allowing for the identification and quantification
of specific molecular components in a sample.

Disadvantages:
> The molecule being tested must be active in the infrared range.
> The material being tested must be transparent in the spectral region of interest.
> For most samples, FTIR provides minimal elemental information.

Applications:
> FTIR is used to analyze the molecular structure and composition of nanoparticles, such as their surface
chemistry, functional groups, and biomolecular interactions.
> FTIR is applied in the study of protein-nanoparticle interactions, which is essential for understanding the
biocompatibility and toxicity of nanoparticles.
> FTIR is also used to monitor the synthesis and assembly of nanostructured biomaterials, such as
nanofibers, nanotubes, and nanogels.
> FTIR is applied in the development of nanoscale biosensors, which are used for the detection of
biomolecules, such as proteins, DNA, and RNA.

Q. X ray Photoelectron Spectroscopy

> X-ray photoelectron spectroscopy (XPS) is used in analyzing the surface chemistry of a material.
> XPS can measure the chemical, electronic state and composition of the materials.

Principle:
> XPS spectra are obtained by irradiating a solid surface with a beam of X-rays while simultaneously
measuring the kinetic energy and number of electrons that are emitted from within a few
nanometres of the material's surface.
> An XPS spectrum is recorded by counting the number of ejected electrons over a range of electron kinetic
energies.
> The peaks in the spectrum appear at energies characteristic of the particular elements present.
> The energies and intensities of the photoelectron peaks are essential for the identification and quantification
of all surface elements.

Working:
> A sample is introduced into the vacuum chamber of the XPS instrument, where it is exposed to a beam of
X-rays, typically generated by a magnesium or aluminium anode.
> The X-rays penetrate the sample surface and interact with the electrons in the atoms, causing them to be
ejected from the sample.
> These ejected electrons are called photoelectrons, and they carry information about the energy levels and
chemical state of the atoms from which they originated.
> The photoelectrons are then collected by a detector and analysed according to their kinetic energy, which
is related to their binding energy, or the energy required to remove them from the atom.
> The XPS instrument is equipped with a hemispherical analyser, which separates the photoelectrons
according to their kinetic energy and measures their intensity.
> The resulting spectrum is a plot of the intensity of the photoelectrons versus their binding energy, providing
a detailed picture of the surface composition and chemical state of the sample.
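
The kinetic-to-binding-energy conversion is the photoelectric relation BE = hν − KE − φ; a minimal sketch
(Al Kα excitation; the spectrometer work function is instrument-specific and assumed here):

```python
h_nu_eV = 1486.6        # Al K-alpha photon energy, eV
work_function_eV = 4.5  # spectrometer work function, eV (assumed)

def binding_energy_eV(kinetic_energy_eV: float) -> float:
    """BE = h*nu - KE - phi, all in eV."""
    return h_nu_eV - kinetic_energy_eV - work_function_eV

# A photoelectron detected at ~1197.1 eV corresponds to BE ~285 eV (C 1s region)
print(binding_energy_eV(1197.1))
```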

Sample Preparation:
> The sample is typically cleaned with a solvent, such as acetone or ethanol, to remove any surface debris
or impurities.
> Next, the sample is dried in a vacuum oven or under a stream of nitrogen gas to remove any moisture.
> Finally, the sample is mounted on a sample holder and introduced into the XPS instrument, where it is
pumped down to a high vacuum environment before analysis.

Advantages:
> Able to provide detailed information about the surface composition and chemical state of a material, with a
depth sensitivity of around 5-10 nanometers.
> Highly sensitive and can detect elements present in concentrations as low as 0.1%.
> XPS is a non-destructive technique.
> Relatively fast, with typical analysis times ranging from a few minutes to several hours.
> XPS can provide information about the oxidation state, bonding, and electronic structure of the surface
atoms.

Disadvantages:
> It only provides information about the top 5-10 nanometers of the sample surface.
> XPS requires a high vacuum environment.
> XPS can be sensitive to sample charging, which can affect the accuracy of the results, and requires careful
calibration and sample preparation to minimize.
> XPS is not suitable for analyzing very large or very small samples.
> The interpretation of XPS spectra can be complex and requires expertise and experience.

Applications:
> XPS is used to analyze the surface composition, chemical state, and electronic structure of nanoparticles,
such as metal oxides, quantum dots, and carbon nanotubes.
> XPS is used to study the interactions between proteins and nanoparticles, which is essential for
understanding the biocompatibility and toxicity of nanoparticles.
> Used to analyze the surface chemistry and electronic structure of nanobiosensors, which are used for
detecting biomolecules, such as proteins, DNA, and RNA.
> XPS is used to analyze the surface chemistry and biocompatibility of nanoparticles used in drug delivery
systems.

Q. Plasmonic Material
> Plasmonic materials are a class of materials that exhibit unique optical properties due to their ability to
support surface plasmon modes.
> Plasmonic materials are typically metals such as silver, gold, and platinum, which can be nanostructured
to enhance their optical properties.
> These materials are chosen based on criteria like chemical and thermal stability, bulk plasma frequency,
and nonlinear optical response.
> For instance, gold, silver, and platinum can undergo structural deformation at high temperatures, while other
metals may degrade chemically.
> Plasmonic materials exhibit surface plasmon resonance, a phenomenon where light interacts with free
electrons on the surface of metal nanoparticles, generating an enhanced electromagnetic field.
> This unique property makes plasmonic materials ideal for various applications in nanobiotechnology,
including biosensing, imaging, photothermal therapy, drug delivery, and tissue engineering.
> Plasmonic nanoparticles can be designed to detect biomolecules, such as proteins and DNA, with high
sensitivity and specificity, or to enhance imaging techniques, like optical coherence tomography and surface-
enhanced Raman scattering.
> Plasmonic materials can be used to develop targeted cancer therapies, where the enhanced
electromagnetic field is used to heat and kill cancer cells, or to create novel drug delivery systems, where the
plasmonic nanoparticles release drugs in response to specific stimuli.

Q. Localised Surface Plasmon Resonance


> Localized Surface Plasmon Resonance is a phenomenon that occurs when light interacts with metal
nanoparticles, typically with dimensions smaller than the wavelength of light.
> In Localized Surface Plasmon Resonance the incident light excites the free electrons on the surface of the
nanoparticle, causing them to oscillate collectively at a specific frequency, known as the plasmon resonance
frequency.
> This collective oscillation generates an enhanced electromagnetic field around the nanoparticle, which can
be several orders of magnitude stronger than the incident light.
> The phenomenon is highly dependent on the size, shape, and composition of the nanoparticle, as well as
the surrounding dielectric environment.
> Localized Surface Plasmon Resonance can be tuned to occur at specific wavelengths, making it a highly
versatile tool for various applications, including biosensing, imaging, and photothermal therapy.
> The enhanced electromagnetic field generated by this phenomenon can also be used to enhance the
sensitivity of spectroscopic techniques, such as
1. Surface-enhanced Raman scattering (SERS)
2. Surface-enhanced fluorescence (SEF)
allowing for the detection of biomolecules at extremely low concentrations.
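> For a small metal sphere, this resonance occurs approximately where the real part of the metal's dielectric
function satisfies Re ε(ω) = −2·ε_m, with ε_m the dielectric constant of the surrounding medium (the Fröhlich
condition); this is why the resonance wavelength shifts when the local environment changes, which is the basis of
LSPR biosensing.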

Q. Nanomagnetism
> Nanomagnetism refers to the study of magnetic properties and behavior of materials at the nanoscale,
typically in the range of 1-100 nanometers.
> Nanomagnetic materials exhibit unique properties such as superparamagnetism, where the magnetic
moment of the nanoparticle can fluctuate randomly due to thermal energy, and magnetic anisotropy, where
the magnetic moment is influenced by the shape and crystal structure of the nanoparticle.
> Nanomagnetic materials can exhibit novel phenomena such as spin-polarized tunneling, giant
magnetoresistance, and magnetic vortex dynamics.
> These properties make nanomagnetic materials promising for various biomedical applications such as
targeted drug delivery and magnetic hyperthermia.
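
The superparamagnetic fluctuation mentioned above is commonly modelled with the Néel relaxation time,
τ = τ₀·exp(KV/k_BT); a minimal sketch (τ₀ and the magnetite anisotropy constant are typical order-of-magnitude
values, assumed here for illustration):

```python
import numpy as np

kB = 1.380649e-23   # Boltzmann constant, J/K
tau0 = 1e-9         # attempt time, s (typical order of magnitude)
K = 1.1e4           # magnetocrystalline anisotropy of magnetite, J/m^3 (approximate)
T = 300.0           # temperature, K

for d_nm in (10, 30):
    V = (np.pi / 6) * (d_nm * 1e-9) ** 3          # particle volume, m^3
    tau_s = tau0 * np.exp(K * V / (kB * T))
    print(d_nm, tau_s)
# A 10 nm particle flips in nanoseconds (superparamagnetic), while the
# exponential volume dependence makes a 30 nm particle effectively blocked.
```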

Q. Types of Magnetic Materials
Iron Oxide Nanoparticles

Iron oxide nanoparticles, particularly magnetite (Fe3O4) and maghemite (γ-Fe2O3), are among the most
widely studied and used nanomagnetic materials. They are biocompatible, non-toxic, and exhibit
superparamagnetic behavior.

Ferrite Nanoparticles

Ferrite nanoparticles are composed of iron oxide and other metals such as nickel, zinc, or manganese. They
exhibit high magnetic moments and are used in applications such as magnetic resonance imaging (MRI) and
hyperthermia.

Cobalt and Nickel Nanoparticles

Cobalt and nickel nanoparticles exhibit high magnetic moments and are used in applications such as
magnetic data storage and spintronics. However, they are toxic and require careful handling.

Rare-Earth Nanoparticles

Rare-earth nanoparticles, such as neodymium (Nd) and dysprosium (Dy), exhibit high magnetic moments
and are used in applications such as permanent magnets and magnetic resonance imaging (MRI).

Core-Shell Nanoparticles

Core-shell nanoparticles consist of a magnetic core surrounded by a non-magnetic shell. They exhibit
improved stability and biocompatibility and are used in applications such as biomedical imaging and targeted
drug delivery.

Doped Magnetic Nanoparticles

Doped magnetic nanoparticles are composed of a magnetic material doped with other elements such as
silver or gold. They exhibit improved magnetic properties and are used in applications such as magnetic
resonance imaging (MRI) and hyperthermia.

Nanostructured Magnetic Materials

Nanostructured magnetic materials, such as nanowires and nanotubes, exhibit unique magnetic properties
and are used in applications such as magnetic data storage and spintronics.

Hybrid Magnetic Nanoparticles

Hybrid magnetic nanoparticles consist of a magnetic material combined with other materials such as
polymers or carbon nanotubes. They exhibit improved stability and biocompatibility and are used in
applications such as biomedical imaging and targeted drug delivery.

Shape-Controlled Magnetic Nanoparticles

Shape-controlled magnetic nanoparticles, such as nanocubes and nanorods, exhibit unique magnetic
properties and are used in applications such as magnetic resonance imaging (MRI) and hyperthermia.

Self-Assembled Magnetic Nanoparticles

Self-assembled magnetic nanoparticles consist of magnetic nanoparticles assembled into larger structures
using techniques such as electrostatic interactions or hydrogen bonding. They exhibit improved stability and
biocompatibility and are used in applications such as biomedical imaging and targeted drug delivery.

Q. Toxicity of Nanomaterials
> The toxicity of nanomaterials is a major concern in the field of nanobiotechnology, as these materials are
being increasingly used in biomedical applications, including drug delivery, imaging, and tissue engineering.
> The unique physical and chemical properties of nanomaterials, such as their small size, high surface area,
and reactivity, can lead to unintended interactions with biological systems, potentially causing harm to
humans and the environment.
> Nanomaterials can penetrate cell membranes, accumulate in organs, and trigger inflammatory responses,
oxidative stress, and DNA damage.
> The toxicity of nanomaterials can be influenced by factors such as their composition, size, shape, surface
charge, and functionalization.
> In Nanobiotechnology, the toxicity of nanomaterials can be particularly problematic, as these materials are
often designed to interact with biological systems in specific ways, which can increase the risk of adverse
effects.
> Therefore, it is essential to carefully evaluate the toxicity of nanomaterials used in nanobiotechnology
through rigorous testing and characterization, using techniques such as in vitro and in vivo assays, to ensure
their safe use in biomedical applications and to minimize potential risks to human health and the environment.
> Nanomaterials can generate reactive oxygen species (ROS) that can damage cellular components, leading
to oxidative stress.
> Nanomaterials can trigger an inflammatory response, leading to the activation of immune cells and the
release of pro-inflammatory cytokines.
> Nanomaterials can cause DNA damage, leading to genetic mutations and potentially cancer.
> Nanomaterials can be toxic to cells, leading to cell death and tissue damage.
> Some nanoparticles can cross the blood-brain barrier and cause neurotoxicity.
> Some nanomaterials can cause skin irritation and allergic reactions.
> Examples of toxic nanomaterials include:
- Carbon nanotubes
- Titanium dioxide nanoparticles
- Zinc oxide nanoparticles
- Silver nanoparticles
- Copper oxide nanoparticles
- Nickel nanoparticles

Q. Factors Responsible for the Nanomaterial Toxicity

> Physical factors include the size and shape of the nanomaterial, with smaller sizes and irregular shapes
often exhibiting higher toxicity due to their increased surface area and reactivity.
> Chemical factors, such as the composition and surface chemistry of the nanomaterial, also play a crucial
role in determining toxicity.
> For instance, nanomaterials with high levels of reactive oxygen species (ROS) or other toxic chemicals can
cause oxidative stress and cellular damage.
> The surface modification and functionalization of nanomaterials can significantly impact their toxicity.
> Biological factors, including the route of exposure, dose, and duration of exposure, as well as the specific
cell type or tissue being targeted, also influence the toxicity of nanomaterials.
> The interactions between nanomaterials and biological systems, such as protein binding, cellular uptake,
and inflammation, can also contribute to toxicity.
> Other factors, such as environmental conditions, including pH, temperature, and humidity, can also impact
the toxicity of nanomaterials.

Q. Routes of Exposure to Engineered nanoparticles


Respiratory Route
> Inhalation is a common route of exposure in workplaces where airborne nanoparticles are present. These particles
travel through the upper and lower respiratory tract to the lungs.
> Once deposited in the lungs, nanoparticles can cross the alveolar epithelium into the bloodstream and
translocate to other organs, including the brain, through systemic circulation or via the olfactory nerves.
> Exposure can lead to inflammation, oxidative stress, and chronic respiratory diseases like asthma or bronchitis.

Dermal Route
> Skin exposure occurs when nanoparticles penetrate through intact skin, wounds, or abrasions.
> Although the intact stratum corneum provides a barrier, damaged or sensitive skin allows nanoparticles to reach
deeper layers.
> Some nanoparticles, such as quantum dots and TiO₂, have been shown to induce inflammatory responses and
oxidative stress in skin cells.

Gastrointestinal Route
> Ingestion can occur unintentionally (e.g., hand-to-mouth transfer during handling) or intentionally (e.g.,
nanoparticles in food, water, or drugs).
> Nanoparticles can be absorbed through the gastrointestinal tract via the Peyer’s patches or epithelial cells,
entering the lymphatic or circulatory systems.
> Toxicity studies show effects on the liver, spleen, and gastrointestinal mucosa, leading to inflammation or immune
responses.

Ocular Route
> Entry occurs through the eyes during exposure to nanoparticle-containing aerosols or solutions.
> Some nanoparticles are designed for drug delivery to treat eye conditions, showing the potential for penetration
into deeper ocular tissues.
> Limited data, but concerns include irritation or long-term effects on the visual system.

Intravenous and Parenteral Routes


> Direct introduction of nanoparticles into the bloodstream via intravenous, intraperitoneal, or intradermal
injections, commonly used in medical research.
> Nanoparticles distribute rapidly to various organs like the liver, kidneys, and spleen.
> Potential for oxidative stress and immune activation in target and non-target tissues.

Mucosal and Olfactory Routes


> Exposure to mucosal tissues (e.g., nose, mouth, and throat) can result in direct absorption into systemic circulation
or neural tissues.
> Nanoparticles deposited in the nasal cavity can translocate to the brain via the olfactory nerve.
> Effects include neurological impacts and localized inflammation.

( Source :
https://www.researchgate.net/publication/223988185_Nanoparticles_Toxicity_and_Their_Routes_of_Exposures )

Q. Mechanisms of Nanoparticle Toxicity


> Upon exposure, nanoparticles can interact with cells through various pathways, including endocytosis,
phagocytosis, and diffusion, leading to their internalization and potential toxicity.
> Once inside, nanoparticles can cause oxidative stress by generating reactive oxygen species (ROS), which
can damage cellular components, including DNA, proteins, and lipids.
> Nanoparticles can also trigger inflammation by activating immune cells, such as macrophages and dendritic
cells, leading to the release of pro-inflammatory cytokines and chemokines.
> Nanoparticles can disrupt cellular homeostasis by altering the activity of enzymes, receptors, and ion
channels, leading to changes in cellular signaling pathways and potentially causing cell death.
> Nanoparticles can also cause physical damage to cells and tissues through mechanisms such as
mechanical stress, abrasion, and penetration, leading to tissue injury and fibrosis.
> The toxicity of nanoparticles can also be influenced by their physical and chemical properties, such as size,
shape, surface charge, and composition, which can affect their interactions with cells and tissues.

Q. Toxicology of Nanocarriers

> Nanocarriers, such as liposomes, polymeric nanoparticles, and dendrimers, are designed to deliver
therapeutic agents, including drugs, genes, and vaccines, to specific sites within the body.
> However, their small size, large surface area, and ability to interact with biological systems can also lead
to unintended toxic effects.
> The toxicology of nanocarriers can be influenced by various factors, including their composition, size,
shape, surface charge, and functionalization.
> For example, cationic nanocarriers can interact with negatively charged cell membranes, leading to
changes in membrane permeability and potentially causing cell death.
> Nanocarriers can also cause oxidative stress, inflammation, and DNA damage, leading to genotoxicity and
potentially carcinogenic effects.
> The biodistribution and pharmacokinetics of nanocarriers can also impact their toxicity, as they can
accumulate in specific organs, such as the liver, spleen, and kidneys, leading to organ-specific toxicity.
> Understanding the toxicology of nanocarriers is crucial for the development of safe and effective
nanomedicines.
