Archive for April 2013
DNA Computing
DNA COMPUTING
Introduction:
DNA, or deoxyribonucleic acid, represents information as a pattern of molecules on a strand of DNA, and each strand represents one possible answer. In a DNA computer, the input, output, and software program are all DNA molecules. DNA computing is one of today's real cutting-edge technologies: scientists are incorporating actual human genetic material into microprocessors and using DNA in test tubes to solve sophisticated mathematical problems.
Today's technology takes DNA out of the test tube and puts it on a solid surface, a step toward the development of larger DNA computers capable of tackling the kinds of complex problems that conventional computers now handle routinely. Scientists have taken DNA computing from the free-floating world of the test tube and anchored it securely to a surface of glass and gold. In doing so, they have taken a small but important step forward in the quest to harness the vast potential of DNA to perform the same tasks that now require silicon and miniature electronic circuits.
DNA computing is a nascent technology that seeks to capitalize on the enormous informational capacity of DNA: biological molecules that can store huge amounts of information and perform operations similar to a computer's through the deployment of enzymes, the biological catalysts that act like software to execute desired operations.
In the University of Wisconsin experiments, a set of DNA molecules was applied to a small glass plate overlaid with gold. By exposing the molecules to certain enzymes, the molecules carrying the wrong answers were weeded out, leaving only the DNA molecules with the right answers.
The Structure of DNA:
The ladderlike double-helix
structure of DNA was discovered in 1953 by James Watson and Francis Crick. The
rungs of the "ladder" contain combinations of four bases (adenine,
thymine, cytosine, and guanine) held together by hydrogen bonds. These base
pairs are arranged along a sugar-phosphate backbone (the sides of the
"ladder").
What is DNA Computing?
DNA computing is a new discipline involving cross-disciplinary research between molecular biology and computer science, with both practical and theoretical work. The theoretical research is mainly concerned with developing formal models for biological phenomena, while the practical research involves realizing the theoretical work in the laboratory.
Instead of retaining information as ones and zeros and using mathematical formulas to solve a problem, DNA computing uses data represented by a pattern of molecules arranged on a strand of DNA. Specific enzymes act like software to read, copy, and manipulate the code in predictable ways.
The area was initiated in 1994 by an article by L. M. Adleman, "Molecular Computation of Solutions to Combinatorial Problems." In this article Adleman showed that it is possible to solve a particular computational problem using standard techniques from molecular biology. Since Adleman's original experiment, researchers have developed several different models to solve other mathematical and computational problems using molecular techniques.
Why DNA Computing?
There are two reasons for using molecular biology to solve computational problems.
1. The information density of DNA is much greater than that of silicon: 1 bit can be stored in approximately one cubic nanometer. Other storage media, such as videotape, need about 1,000,000,000,000 cubic nanometers to store 1 bit.
2. Operations on DNA are massively parallel: a test tube of DNA can contain trillions of strands, and each operation on a test tube of DNA is carried out on all strands in the tube in parallel.
Explanation of Molecular Computing with DNA:
Today DNA computing has become one of the growth fields in the computational sciences. The first toy problems solved by DNA computation were Hamiltonian path problems, close relatives of the traveling-salesman problem. The objective is to find a route that visits a fixed number of cities exactly once each. The problem can be solved with pencil and paper if only a small number of cities is involved, but it explodes combinatorially (the problem is NP-complete) when a large number of cities is considered. On conventional computers, such problems quickly become intractable because of the large number of possible paths that must be generated and compared.
But DNA computers can use their massive parallelism to find a valid route among a large number of cities without trying out every possible combination one at a time. Instead, massive numbers of short DNA sequences representing each city are mixed together in solution. Each end of each city sequence is sticky, so the sequences stick together into long chains representing every possible order in which the cities could be visited. Every possible route through the cities is generated at one time, usually in less than an hour in a test tube. The next task is to keep only the DNA sequences that start and end with the city of origin. Then only the sequences with the correct number of stops, one per city, are retained. Finally, only the sequences that visit each city exactly once are kept, yielding the set of solutions.
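To make the generate-and-filter idea concrete, here is a conventional, brute-force sketch in Java that mirrors the three filtering steps on a tiny hypothetical map of four cities. The city names and the map are illustrative only, and there is no chemistry here: on a DNA computer every candidate chain forms at once in the test tube, while this sketch has to enumerate them with loops.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Conventional analogue of the DNA generate-and-filter procedure described above,
// on a hypothetical map of four cities with origin "A".
public class DnaStyleTourFilter {

    static final String[] CITIES = {"A", "B", "C", "D"};

    public static void main(String[] args) {
        // "Ligation" step: assemble every possible chain of city codes up to length 5.
        List<List<String>> candidates = new ArrayList<>();
        for (int len = 1; len <= CITIES.length + 1; len++) {
            buildChains(new ArrayList<>(), len, candidates);
        }

        List<List<String>> solutions = new ArrayList<>();
        for (List<String> chain : candidates) {
            // Filter 1: keep only chains that start and end at the city of origin.
            if (!chain.get(0).equals("A") || !chain.get(chain.size() - 1).equals("A")) continue;
            // Filter 2: keep only chains with the correct number of stops (one per city, plus the return).
            if (chain.size() != CITIES.length + 1) continue;
            // Filter 3: keep only chains that visit every city exactly once (apart from the return to A).
            Set<String> visited = new HashSet<>(chain.subList(0, chain.size() - 1));
            if (visited.size() != CITIES.length) continue;
            solutions.add(chain);
        }
        System.out.println("Candidate chains: " + candidates.size());
        System.out.println("Surviving tours:  " + solutions);
    }

    // Build every chain of the given length, mimicking random assembly of city strands.
    static void buildChains(List<String> prefix, int length, List<List<String>> out) {
        if (prefix.size() == length) {
            out.add(new ArrayList<>(prefix));
            return;
        }
        for (String city : CITIES) {
            prefix.add(city);
            buildChains(prefix, length, out);
            prefix.remove(prefix.size() - 1);
        }
    }
}

The point of the comparison is that the silicon version must examine each of its 1,364 candidate chains one after another, whereas the molecular version applies each filter to every strand in the tube in a single biochemical step.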
Adleman performed agarose gel electrophoresis,
ligation reactions and polymerase chain reactions to carry out those steps with
real DNA sequences. He derived an
optimal solution to a seven-city traveling-salesman problem in approximately
one week. Unfortunately, you can solve
the same problem on a piece of paper in about an hour — or by a digital
computer in a few seconds.
But when the number of cities is increased to
just 70, the problem becomes intractable for even a 1,000-Mips supercomputer.
By contrast, the 70-city problem is a theoretical breeze for DNA computing,
because while a single DNA molecule performs at only .001 Mips, a test tube
full can perform at about 1 quadrillion Mips.
Adleman pointed out that the
medium was remarkable for the following reasons.
Ø Speed: The above computation popped along at 10^14 operations per second, about 100x faster than a fast supercomputer.
Ø Energy Efficiency: Adleman figured his computer was running at 2x10^19 operations per joule, only about 16x more energy per operation than the floor set by the second law of thermodynamics. Computers built by humans waste about a billion times more energy per operation.
Ø Memory: DNA stores information at a density of about 1 bit per cubic nanometer. This is about a trillion times denser than videotape.
DNA Factoids:
Ø The DNA molecule, when fully extended, is about 1.5 meters long. If all of the DNA in our cells were stretched out, it would reach to the moon 6,000 times over.
Ø DNA is the basic storage medium for all living cells. It has contained and transmitted the data of life for billions of years, and in that sense it is the prototype of human-made computers.
Ø Roughly 10 trillion DNA molecules could
fit into a space the size of a marble. Since
all these molecules can process data simultaneously, you could theoretically
have 10 trillion calculations going on in that small space at once. That's more
than the fastest existing supercomputer can handle (currently about 1 trillion
per second).
Advantages:
A key advantage of DNA is its
microscopic size. “In a test tube
smaller than the joint on your finger, you can put billions and billions of DNA
strands”.
DNA stores a
massive amount of data in a small space. Its effective density is roughly
100,000 times greater than modern hard disks. And while a desktop PC
concentrates on doing one task at a time very quickly, billions of DNA
molecules in a jar will attack the same problem billions of times over.
The appeal of DNA computing lies in the fact
that DNA molecules can store far more information than any existing
conventional computer chip. It has been estimated that a gram of dried DNA can
hold as much information as a trillion CDs. Moreover, in a biochemical reaction
taking place in a tiny surface area, hundreds of trillions of DNA molecules can
operate in concert, creating a parallel processing system that mimics the
ability of the most powerful supercomputer.
The chips that drive conventional computers represent information as a series of electrical impulses using ones and zeros, and mathematical formulas are used to manipulate that binary code to arrive at an answer. DNA computing, on the other hand, depends on information represented as a pattern of molecules arranged on a strand of DNA. Certain enzymes are capable of reading that code, copying it, and manipulating it in predictable ways.
But why use DNA or RNA to solve problems when we already have fast, silicon-based microprocessors? DNA processors use cheap, clean, and readily available biomaterials (rather than the costly, and often toxic, materials that go into traditional microprocessors). DNA also stores more information in less space, and because it computes via biochemical reactions, many of which can take place simultaneously, DNA can handle massive parallel processing. In an era when the end of Moore's Law is in sight, computer scientists are looking for a way to take processors beyond the speed and size limits of silicon microcircuitry. DNA computing is one way to do this.
More than 25 years ago, when
Intel was developing the first microprocessor, company cofounder Gordon Moore
predicted that the number of transistors on a microprocessor would double
approximately every 18 months. To date, Moore's
law has proven remarkably accurate.
An End to Disease :
Most
of the research on DNA processors is being done by biotech companies hoping to
cash in on recent breakthroughs in decoding the human genome. Scientists at
these companies have created microprocessor chips that contain fragments of DNA
in place of the usual electrical circuitry. These chips, which contain an array
of specific genetic information that corresponds to the data on a human gene,
are known as microarrays. Once a microarray is fed into a special, PC-like machine, scientists can compare the chip to real human DNA to see how human DNA changes when it becomes cancerous or is infected by a virus. Eventually, when scientists
have a more thorough understanding of which parts of the human genome control
specific functions, they will be able to use microarrays to determine an
individual's susceptibility to certain diseases or resistance to particular
drugs. (Biotech companies are patenting
their microarrays, and plan to sell them to doctors and scientists.)
Disadvantages:
One of the practical difficulties that arise in implementing a DNA computer is controlling the error rate at each computational step. Unlike their logical counterparts, biological operations (bio-ops) produce incorrect results from time to time. The error rates typically range from 10^-5 to 0.05.
Theoretical & Practical Computing:
Starting from observations of the structure and dynamics of DNA, theoretical research began to propose formal models (that is, models with rules for performing theoretical operations) for DNA computers. Once a model has been created, it is important to see what kinds of problems can be solved with it.
The practical side of DNA computing has progressed at a much slower rate, due mainly to the fact that laboratory work is very time-consuming and error-prone. However, the practical research is now beginning to pick up speed.
DNA computing is an interdisciplinary field where biologists, computer scientists, physicists, mathematicians, chemists, and others find a lot of interesting problems that can be applied to both the theoretical and practical areas of DNA computing.
Anyone who wants to begin working on DNA computing should have a basic idea of what they want to do, i.e., the practical or the theoretical side. If they prefer the practical side, they should be oriented toward chemistry, biochemistry, computer science, and so on. If they prefer the theoretical side, they should be oriented toward computer science, mathematics, and so on.
CONCLUSION:
The development of biotechnology can definitely lead to the development of DNA computing. Today DNA computing is one of the nascent technologies. DNA computing could eventually replace existing conventional microprocessors because of its cheap, clean, and readily available biomaterials.
Kerberos
KERBEROS
INTRODUCTION
Imagine a scenario where many users are logged on to their workstations and are accessing services available on different application servers on a network. Users are generally allowed access to the Internet, and this network can be accessed from any point on the Internet as well.
Such a network is exposed to
the hazards of cyberspace as well as the probability of mischief from within, a
fact often overlooked while designing network security schemes.
Anyone can enter your network and remain in listening mode to pick up sensitive information such as passwords. When an attempt is made to change the source of the data by posing as either the client or the server, it is termed impersonation. Firewalls might prevent outside dangers, but what if the troublemaker is right in the next cubicle?
Authentication seems to be the crux of the whole matter. In other words, client and server should validate each other so that attempts at impersonation can be eliminated. This is where Kerberos, a network authentication protocol, comes in.
Kerberos is designed to provide authentication services in heterogeneous, distributed, networked environments. Named after the three-headed dog of Greek mythology that guards the gates of Hades, Kerberos is today an Internet standard for authentication and is defined by RFC 1510.
Let's assume a user enters a password, which is then sent to the server from which some service is requested. The user could be using an e-mail client to contact the mail server, running a Telnet application from his workstation, or trying to avail of some other service.
The application server maintains a database of passwords (generally in encrypted form). With client-server computing being the norm, the presence of multiple servers even in a small organization is common. The traditional authentication method would require the password database to be on every server. One of the greatest security risks in this is the password travelling from client to server over the network in unencrypted form. Maintenance and security headaches also increase if the password database is spread all over the network.
WHAT IS KERBEROS?
Kerberos is a network authentication protocol. It is designed to provide strong
authentication for client/server applications by using secret-key
cryptography. A free implementation of
this protocol is available from the Massachusetts Institute of
Technology. Kerberos is available in
many commercial products as well.
The Internet is an insecure place. Many of the protocols used on the Internet do not provide any security. Tools to "sniff" passwords off the network are in common use by system crackers. Thus, applications that send an unencrypted password over the network are extremely vulnerable. Worse yet, other client/server applications rely on the client program to be "honest" about the identity of the user who is using it. Still other applications rely on the client to restrict its activities to those it is allowed to do, with no other enforcement by the server.
Some sites attempt to use firewalls to solve their network security problems. Unfortunately, firewalls assume that "the bad guys" are on the outside, which is often a very bad assumption. Insiders carry out most of the really damaging incidents of computer crime. Firewalls also have a significant disadvantage in that they restrict how your users can use the Internet. (After all, firewalls are simply a less extreme example of the dictum that there is nothing more secure than a computer that is not connected to the network and is powered off!) In many places, these restrictions are simply unrealistic and unacceptable.
Kerberos was created by MIT as a solution to these network security problems. The Kerberos protocol uses strong cryptography so that a client can prove its identity to a server (and vice versa) across an insecure network connection. After a client and server have used Kerberos to prove their identity, they can also encrypt their communications as they go about their business.
Kerberos is freely available from MIT under a copyright permission notice very similar to the one used for the BSD operating system and the X11 windowing system. MIT provides Kerberos in source form, so that anyone who wishes to use it may look over the code and satisfy themselves that it is trustworthy. In addition, for those who prefer to rely on a professionally supported product, Kerberos is available from many different vendors.
THE KERBEROS SCHEME:
Kerberos is built around three things: the Authentication Server (AS), the Ticket Granting Server (TGS), and encryption. The Authentication Server is the central entity in the whole scheme, the central authentication facility for the whole network. Kerberos uses a series of encrypted messages for verifying the authenticity of the client and the server. Of course, no information pertaining to user validity should be transmitted over the network in unencrypted form. In addition, there should be two-way authentication, with the client also having some way to find out that it is communicating with the intended server.
The client and server should both share some sort of identification with the authentication server. They can communicate only when the AS conveys to each of them that the party at the other end is indeed the one they want to talk to.
The client knows an encryption key that is shared only with the authentication server; this key is derived from the user's password. In the same way, all application servers share an encryption key with the authentication server. For encryption, Kerberos uses the Data Encryption Standard (DES).
Tickets are nothing but certificates issued by the authentication server upon receiving a client request and subsequently verifying that the request is from a valid user. The AS encrypts this certificate, rendering it readable only by the intended server.
The concept of a ticket gave rise to a more practical concept called the Ticket Granting Ticket (TGT), so that the user can use the services of multiple servers in the network without repeating the initial steps of the authentication process.
WORKING OF KERBEROS:
When a user types a user name, it is sent to the authentication server, which replies with a session key and a Ticket Granting Ticket (TGT), both encrypted with the user's key. This session key is for communicating with the Ticket Granting Server (TGS). The client program now requests the password from the user and derives the user's key from it. If the TGT and the TGS session key can be decrypted, it implies that the password is correct.
When the user wants to request the services of an application server, this request is sent to the TGS. The key element of the request is the TGT, along with the network address of the workstation and other data encrypted with the TGS key. If the request is found to be valid, the TGS issues a ticket, which contains the user name, address, service name, life span, time stamp, and the all-important session key. For communication to take place between the client and the application server, they should share the same key.
The TGS generates two copies of this session key: one encrypted with the TGS session key for the client, and the other encrypted with the application server's key. Using the TGS session key, the client decrypts the copy meant for it, and the copy for the application server is routed to the destination.
After the server has successfully decrypted the session key, it knows that this particular client is trying to contact it. To verify the client's identity, it sends a random number in plain text, which the client encrypts with the session key they both share and returns to the server. On seeing the message, the server can tell that this is the real McCoy, because nobody else can encrypt with the same session key.
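The flow just described can be sketched in a few lines of Java. This is a toy illustration only: it uses AES keys derived with SHA-256 and plain strings as tickets, whereas real Kerberos uses the encryption types and message formats defined in RFC 1510, and every name and secret below is made up.

import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

// Toy sketch of the Kerberos message flow described above (not the real wire protocol).
public class KerberosSketch {

    static SecretKey keyFromSecret(String secret) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(secret.getBytes(StandardCharsets.UTF_8));
        return new SecretKeySpec(digest, 0, 16, "AES");   // 128-bit AES key stands in for DES
    }

    static String encrypt(SecretKey key, String text) throws Exception {
        Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, key);
        return Base64.getEncoder().encodeToString(c.doFinal(text.getBytes(StandardCharsets.UTF_8)));
    }

    static String decrypt(SecretKey key, String blob) throws Exception {
        Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
        c.init(Cipher.DECRYPT_MODE, key);
        return new String(c.doFinal(Base64.getDecoder().decode(blob)), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        // Long-term keys: the user's key is derived from the password; the TGS and
        // the application server each share a secret with the authentication server.
        SecretKey userKey = keyFromSecret("user-password");
        SecretKey tgsKey  = keyFromSecret("tgs-secret");
        SecretKey appKey  = keyFromSecret("mail-server-secret");

        // 1. AS reply: a TGS session key encrypted under the user's key, plus a TGT
        //    (carrying the same session key) encrypted under the TGS key.
        String asReply = encrypt(userKey, "tgs-session-key=sk1");
        String tgt     = encrypt(tgsKey,  "user=alice;addr=10.0.0.5;tgs-session-key=sk1");

        // 2. Only someone who knows the password can read the AS reply.
        System.out.println("client reads:   " + decrypt(userKey, asReply));
        SecretKey tgsSessionKey = keyFromSecret("sk1");

        // 3. The client sends the TGT plus data encrypted with the session key;
        //    the TGS opens both and issues a service ticket under the app server's key.
        String authenticator = encrypt(tgsSessionKey, "user=alice;time=now");
        System.out.println("TGS reads TGT:  " + decrypt(tgsKey, tgt));
        System.out.println("TGS reads auth: " + decrypt(tgsSessionKey, authenticator));
        String serviceTicket = encrypt(appKey, "user=alice;service=mail;session-key=sk2");

        // 4. Only the genuine application server (holder of appKey) can open the ticket,
        //    which is also what proves the server's identity to the client.
        System.out.println("server reads:   " + decrypt(appKey, serviceTicket));
    }
}

Each party can read only the pieces encrypted under a key it actually holds, which is the whole trick: possession of the right long-term key is what proves identity.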
In an internetworked enterprise, where individual networks (or groups of networks) may be separated from each other, each AS can cater to a subset of the total number of application servers present in the enterprise. Such a subset of users and application servers is called a realm. Any user registered with one authentication server can request authentication from another authentication server in order to access the services of any application server registered with it.
SECURITY THROUGH KERBEROS:
Kerberos quells any attempt to sniff the authentication information or to impersonate the client or the server. Sniffing out passwords is not possible, as they are always in encrypted form. Nor is the other deadly ploy, impersonation by network intruders, permitted.
Impersonating the authentication server itself is futile, as it shares the encryption keys of the client and of all application servers; without these keys, even a client request routed to the impostor cannot be decrypted. Impersonating the application server by trying to capture the session ticket is equally useless, because tickets can only be decrypted with the server key shared by the application server and the authentication server.
Manipulation of data, if any, is also checked by Kerberos, as it includes a feature for ensuring data integrity by providing a checksum computed with a secret key. Hence, any changes to the data are easily detected.
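A minimal sketch of such a keyed checksum, using HMAC-SHA256 from the standard Java crypto API; the message, the key, and the choice of HMAC are illustrative, not the exact checksum algorithms Kerberos specifies. Because the attacker does not hold the shared key, an altered message no longer matches the checksum that was sent.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Simplified illustration of data-integrity protection with a keyed checksum.
public class KeyedChecksumSketch {
    static byte[] checksum(byte[] key, String message) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(message.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        byte[] sessionKey = "shared-session-key".getBytes(StandardCharsets.UTF_8);
        String message = "transfer 100 to account 42";

        byte[] sent = checksum(sessionKey, message);          // sender attaches this checksum
        String tampered = "transfer 900 to account 13";       // attacker alters the data in transit
        byte[] recomputed = checksum(sessionKey, tampered);   // receiver recomputes over what arrived

        System.out.println("integrity ok? " + Arrays.equals(sent, recomputed));   // prints false
    }
}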
IMPLEMENTING KERBEROS:
To make this authentication scheme work, it has to be integrated with other parts of the system. Apart from setting up an authentication server, some utility programs must be installed on the workstations. Some of these are kinit (to obtain Kerberos credentials), kdestroy (to destroy credentials), klist (to list credentials), and kpasswd (to change Kerberos passwords).
Windows 2000 uses Kerberos as its native authentication method, and all Windows 2000 systems support the protocol as a client. While logging into Windows 2000, one may not be aware that the Kerberos login tool kinit (or the functionality provided by it) is integrated into it. In Windows 2000, the Authentication Server functionality is provided by the KDC (Key Distribution Center). Kerberos is available on a number of platforms, including many different flavors of Unix, DOS, Mac, and Windows NT.
LIMITATIONS OF KERBEROS:
It is an accepted fact that no system related to network security can be foolproof; there is always some weak point or other waiting to be exploited by a determined network intruder. In Kerberos, the loose brick in this otherwise well-secured fortress becomes apparent when a password is entered into the client program, which then uses it to derive the encryption key. It is during this very brief period (a few milliseconds) that the password exists in its original unencrypted form.
If a user enters a password into a client program and an intruder (using a Trojan horse, for instance) has modified the initial authentication program along the path between the user and the server, the attacker may obtain enough information to impersonate the user. These limitations can to some extent be addressed by additional techniques such as one-time passcodes.
CONCLUSION:
In the end, Kerberos is a solution to many of your network security problems. It provides the tools of authentication and strong cryptography over the network to help you secure your information systems across your entire enterprise. We hope you find Kerberos as useful as it has been to us. At MIT, Kerberos has been invaluable to our information technology architecture.
Posted by Unknown
Enterprise Java Bean
ENTERPRISE
JAVA BEANS
Enterprise
JavaBeans (EJB) is a server-side component architecture that simplifies the
process of building enterprise-class distributed component applications in
Java. By using EJB, you can write scalable, reliable, and secure applications
without writing your own complex distributed component framework. EJB is about
rapid application development for the server side; you can quickly and easily
construct server-side components in Java by leveraging a prewritten distributed
infrastructure provided by the industry. EJB is designed to support application
portability and reusability across any vendor’s enterprise middleware services.
If you
are new to enterprise computing, these concepts will be made very clear
shortly. EJB is a complicated subject and thus deserves a thorough explanation.
In this chapter, we’ll introduce EJB by answering the following questions:
1)
What plumbing
do you need to build a robust distributed object deployment?
2)
What is EJB,
and what value does it add?
3)
Who are the players
in the EJB Ecosystem?
Let’s
kick things off with a brainstorming session.
THE MOTIVATION FOR EJB
Figure
1.X shows a typical business application.
This application could exist in any vertical industry, and could solve
any business problem. Here are some
examples:
·
A stock
trading system
·
A banking
application
·
A customer
call center
·
A procurement
system
·
An insurance
risk analysis application.
Regardless of the vertical industry, such an application needs a common set of middleware services beneath the business logic, including:
·
Remote method
invocations.
We need logic that connects a
client and server together via a network connection. This includes dispatching
method requests, brokering of parameters, and more.
·
Load-balancing. Clients
must be fairly routed to servers. If a
server is overloaded, a different server should be chosen.
·
Transparent
fail-over. If
a server crashes, or if the network crashes, can clients be re-routed to other
servers without interruption of service?
If so, how fast does fail-over happen?
Seconds? Minutes? What is acceptable for your business problem?
·
Back-end
integration.
Code needs to be written to persist
business data into databases, as well as integrate with legacy systems that may
already exist.
·
Transactions. What
if 2 clients access the same row of the database simultaneously? Or what if the database crashes? Transactions protect you from these issues.
·
Clustering. What
if the server contains state when it crashes?
Is that state replicated across all servers, so that clients can use a
different server?
·
Dynamic
redeployment. How
do you perform software upgrades while the site is running? Do you need to take a machine down, or can
you keep it running?
COMPONENT ARCHITECTURES
It
has been a number of years now since the idea of multi-tier server-side
deployments surfaced. Since then, well
over fifty (50) application servers have appeared on the market. But unfortunately, there was no definition of
what a middle-tier component really is.
Because of this, each application server has been providing component
services in a non-uniform, proprietary way.
This means that once you bet on an application server, your code is
locked into that vendor’s solution. This
greatly reduces portability and is an especially tough pill to swallow in the
Java world, which promotes portability.
It also hampers the commerce of components because a customer cannot
combine a component written to one application server with another component
written to a different application server.
INTRODUCING ENTERPRISE
JAVABEANS
The Enterprise JavaBeans (EJB) standard is a component architecture for deployable server-side components in Java. It is an agreement between components and application servers that enables any component to run in any application server. EJB components (also called enterprise beans) are deployable, and can be imported and loaded into an application server, which hosts those components.
The top
three values of EJB are:
·
It is agreed
upon by the industry: Those
who use EJB will benefit from its widespread use. Since everyone will be on the same page, in
the future it will be easier to hire employees who understand your systems
(since they may have prior EJB experience), learn best practices to improve
your system (by reading books like this one), partner with businesses (since
technology will be compatible), and sell software (since customers will accept
your solution).
·
Portability is
easier: Since EJB is a standard, you do
not need to gamble on a single vendor.
Although portability will never be free, it is cheaper than without a
standard.
·
Rapid Application Development: Your application
can be constructed faster since you get middleware from the application
server. There’s also less of a mess to
maintain.
WHY JAVA
EJB
components must be written in Java only, and require an organizational
dedication to Java. This is indeed a
serious restriction. The good news,
however, is that Java is an ideal language to build components, for many
reasons:
Interface/implementation separation.
We need a clean interface/implementation separation to ship components. After all, customers who purchase components should not be messing with the implementation; if they do, upgrades and support become horrendous. Java supports this at a syntactic level via the interface keyword and the class keyword, as the sketch below illustrates.
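A minimal sketch of that separation, with hypothetical names (PricingService, PricingServiceImpl) rather than anything from the EJB APIs: customers compile against the interface, while the vendor remains free to rewrite the class behind it.

// The published contract: this is all a customer of the component sees.
public interface PricingService {
    double priceFor(String productId, int quantity);
}

// The hidden implementation: it can be rewritten or upgraded freely,
// as long as it keeps honoring the PricingService interface.
class PricingServiceImpl implements PricingService {
    public double priceFor(String productId, int quantity) {
        double unitPrice = 9.99;                    // stand-in for a real price lookup
        double discount = quantity > 100 ? 0.9 : 1.0;
        return unitPrice * quantity * discount;
    }
}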
Interpreted. Since
Java is interpreted, it is safe. In
Java, if a thread dies, the application stays up. Pointers are no longer an issue. Memory leaks occur much less often. This safety is extremely important for
mission-critical applications. Sure,
this might make your application slower, but 90% of all business problems are
glorified Graphical User Interfaces (GUIs) to databases. That database is going to be your #1
bottleneck, not Java.
Cross-platform. Java
runs on any platform. Since EJB is an
application of Java, this means EJB should also easily run on any
platform. This is valuable for customers
who have invested in a variety of powerful hardware, such as Win32, UNIX, and
mainframes. They do not want to throw
these investments away.
EJB AS A BUSINESS SOLUTION
EJB
is specifically used to help solve business problems. EJB components (enterprise beans) might
perform any of the following tasks:
* Perform business
logic
* Access a database
* Access another system
* Thick clients
* Dynamically generated web
pages.
THE BEAN PROVIDER
The bean provider
supplies business components, or enterprise beans. Enterprise beans are not complete
applications, but rather are deployable components that can be assembled into
complete solutions. The bean provider
could be an ISV selling components, or an internal department providing
components to other departments.
THE EJB DEPLOYER
After
the application assembler builds the application, the application must then be deployed (and go live) in a running
operational environment. Some challenges
faced here include:
·
Securing the
deployment with a firewall.
·
Choosing the
right hardware to provide the needed level of scalability.
·
Providing
redundant hardware for fault-tolerance.
·
Integrating
with an LDAP server for security lists, such as Lotus Notes or Microsoft
Exchange Server.
·
Performance-tuning the
system.
BRIEF
INTRODUCTION
Enterprise JavaBeans is a specification for creating
server-side scalable, transactional, multi-user secure enterprise-level
applications. It provides a consistent component architecture framework for
creating distributed n-tier middleware. It would be fair to call a bean written
to EJB spec a Server Bean.
A typical EJB architecture consists of:
an EJB server,
EJB containers that run on these servers,
EJBs that run in these containers,
EJB clients, and
other auxiliary systems such as the Java Naming and Directory Interface (JNDI).
In a typical development and deployment scenario, there will be an EJB server provider who creates and sells an EJB server along with EJB containers that will run on these servers. Then there will be the EJB providers, the people responsible for developing the EJBs, and the application assemblers, the people who use pre-built EJBs to build their applications.
EJB Servers:
These are analogous to a CORBA ORB. The EJB server provides system services such as a raw execution environment, multiprocessing, load balancing, and device access; it also provides naming and transaction services and makes containers visible.
EJB Containers:
These act as the interface between an Enterprise
Java Bean and the outside world. An EJB client never accesses a bean directly.
Any bean access is done through container-generated methods which in turn
invoke the bean's methods. The two types of containers are session containers
that may contain transient, non-persistent EJBs whose states are not saved at
all and entity containers that contain persistent EJBs whose states are saved
between invocations.
EJB Clients:
These make use of the EJBs for their operations. They find the EJB container that contains the bean through the Java Naming and Directory Interface (JNDI). They then make use of the EJB container to invoke the bean's methods, as sketched below.
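A sketch of what such a client looks like under the EJB programming model. The Cart and CartHome interfaces and the JNDI name "ejb/Cart" are hypothetical, chosen purely for illustration; the PortableRemoteObject.narrow() call is the usual idiom for casting a remote home reference.

import java.rmi.RemoteException;
import javax.ejb.CreateException;
import javax.ejb.EJBHome;
import javax.ejb.EJBObject;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.rmi.PortableRemoteObject;

// Hypothetical remote and home interfaces for a "Cart" session bean.
interface Cart extends EJBObject {
    void addItem(String productId) throws RemoteException;
    int getItemCount() throws RemoteException;
}

interface CartHome extends EJBHome {
    Cart create(String customerId) throws CreateException, RemoteException;
}

// The client: find the bean's home through JNDI, create a bean, call it.
public class CartClient {
    public static void main(String[] args) throws Exception {
        Context ctx = new InitialContext();   // connects to the server's JNDI naming service

        Object ref = ctx.lookup("ejb/Cart");  // JNDI name assigned when the bean was deployed
        CartHome home = (CartHome) PortableRemoteObject.narrow(ref, CartHome.class);

        Cart cart = home.create("alice");     // container-generated create() returns an EJB object
        cart.addItem("EJB-BOOK-1");           // business methods go through the container
        System.out.println("items in cart: " + cart.getItemCount());

        cart.remove();                        // release the bean when the client is done
    }
}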
Enterprise Java Beans:
There
are two types of EJBs. They are
Session Beans and
Entity Beans
Session Beans:
Each Session Bean is usually associated with one EJB Client, and each is created and destroyed by the particular EJB Client that it is associated with. A Session Bean can be either stateful or stateless. However, Session Beans do not survive a system shutdown.
Entity Beans:
Entity Beans always have states. Each Entity Bean
may however be shared by multiple EJB Clients. Their states can be persisted
and stored across multiple invocations. Hence they can survive System
Shutdowns.
EJB servers have a right to manage their working
set. Passivation is the process
by which the state of a Bean is saved to persistent storage and then is swapped
out. Activation is the process
by which the state of a Bean is restored by swapping it in from persistent
storage. Passivation and Activation apply to both Session and Entity Beans.
There
are two types of Session Beans. They are
Stateless Session Beans and
Stateful Session Beans
Stateless Session Beans:
These service multiple clients (remember MTS components?). These types of EJBs have no internal state; since they have no state, they need not be passivated. Because they are stateless, they can be pooled to service multiple clients.
Stateful Session Beans:
These types of EJBs possess internal state, so they need to handle Activation and Passivation. However, there can be only one Stateful Session Bean per EJB Client. Since they can be persisted, they are also called Persistent Session Beans. These types of EJBs can be saved and restored across client sessions: to save, a call to the bean's getHandle() method returns a handle object; to restore, call the handle object's getEJBObject() method, as sketched below.
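A sketch of that save/restore sequence, reusing the hypothetical Cart remote interface from the client example above; writing the handle to a file is just one way an application might keep it between sessions.

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import javax.ejb.Handle;

// Saving and later restoring a reference to a stateful session bean via its handle.
public class HandleDemo {

    static void saveReference(Cart cart) throws Exception {
        Handle handle = cart.getHandle();              // getHandle() is defined on EJBObject
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new FileOutputStream("cart.handle"))) {
            out.writeObject(handle);                   // handles are serializable
        }
    }

    static Cart restoreReference() throws Exception {
        try (ObjectInputStream in =
                 new ObjectInputStream(new FileInputStream("cart.handle"))) {
            Handle handle = (Handle) in.readObject();
            return (Cart) javax.rmi.PortableRemoteObject.narrow(
                    handle.getEJBObject(), Cart.class); // re-obtain the EJB object reference
        }
    }
}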
Persistence
in Entity Beans is of two types. They are:
Container-managed persistence
Bean-managed persistence
Container-managed persistence:
Here, the EJB container is responsible for saving
the Bean's state. Since it is container-managed, the implementation is
independent of the data source. The container-managed fields need to be
specified in the Deployment Descriptor and the persistence is automatically
handled by the container.
Bean-managed persistence:
Here, the Entity Bean is directly responsible for
saving its own state. The container does not need to generate any database
calls. Hence the implementation is less adaptable than the previous one as the
persistence needs to be hard-coded into the bean.
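A sketch of what bean-managed persistence looks like in practice: the bean issues its own JDBC calls inside ejbLoad() and ejbStore(). The table, columns, and the DataSource JNDI name are hypothetical, and the rest of the bean class is omitted.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.ejb.EJBException;
import javax.naming.InitialContext;
import javax.sql.DataSource;

// Fragment of a bean-managed-persistence entity bean: the bean hard-codes
// its own SQL instead of letting the container generate it.
public abstract class AccountBeanFragment {

    protected String accountId;   // primary key field
    protected double balance;

    private Connection connection() throws Exception {
        DataSource ds = (DataSource)
                new InitialContext().lookup("java:comp/env/jdbc/AccountsDB");  // hypothetical name
        return ds.getConnection();
    }

    // Called by the container when the instance must refresh its state.
    public void ejbLoad() {
        try (Connection con = connection();
             PreparedStatement ps =
                 con.prepareStatement("SELECT balance FROM accounts WHERE id = ?")) {
            ps.setString(1, accountId);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) balance = rs.getDouble(1);
            }
        } catch (Exception e) {
            throw new EJBException(e);   // wrap checked exceptions for the container
        }
    }

    // Called by the container when the instance must write its state back.
    public void ejbStore() {
        try (Connection con = connection();
             PreparedStatement ps =
                 con.prepareStatement("UPDATE accounts SET balance = ? WHERE id = ?")) {
            ps.setDouble(1, balance);
            ps.setString(2, accountId);
            ps.executeUpdate();
        } catch (Exception e) {
            throw new EJBException(e);
        }
    }
}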
EJBs are deployed as serialized instances (*.ser
files). The manifest file is used to list the EJBs. In addition to this, a Deployment Descriptor has to be supplied along
with each .ser file. It contains a serialized instance of an EntityDescriptor
or a SessionDescriptor.
The steps involved in developing and deploying an Entity Bean are as follows (a minimal skeleton follows the list):
·
Set up your Data Source to the Database.
·
Define your Home Interface.
·
Define your Remote Interface.
·
Develop your EntityBean
·
Define a Primary Key class
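A minimal sketch of steps 2 through 5 for a hypothetical Account bean using container-managed persistence (so the bean itself contains no SQL); every name here is illustrative, and the data source setup and deployment descriptor are left out.

import java.io.Serializable;
import java.rmi.RemoteException;
import javax.ejb.CreateException;
import javax.ejb.EJBHome;
import javax.ejb.EJBObject;
import javax.ejb.EntityBean;
import javax.ejb.EntityContext;
import javax.ejb.FinderException;

// Step 2: home interface -- how clients create and find Account beans.
interface AccountHome extends EJBHome {
    Account create(String id, double openingBalance) throws CreateException, RemoteException;
    Account findByPrimaryKey(AccountKey key) throws FinderException, RemoteException;
}

// Step 3: remote interface -- the business methods clients may call.
interface Account extends EJBObject {
    void deposit(double amount) throws RemoteException;
    double getBalance() throws RemoteException;
}

// Step 5: primary key class -- must be serializable with equals()/hashCode().
class AccountKey implements Serializable {
    public String id;
    public AccountKey() {}
    public AccountKey(String id) { this.id = id; }
    public boolean equals(Object o) { return o instanceof AccountKey && ((AccountKey) o).id.equals(id); }
    public int hashCode() { return id.hashCode(); }
}

// Step 4: the entity bean class -- container-managed fields plus lifecycle callbacks.
public class AccountBean implements EntityBean {
    public String id;          // container-managed field (named in the deployment descriptor)
    public double balance;     // container-managed field

    public AccountKey ejbCreate(String id, double openingBalance) {
        this.id = id;
        this.balance = openingBalance;
        return null;           // with container-managed persistence, the container builds the key
    }
    public void ejbPostCreate(String id, double openingBalance) {}

    // Business methods exposed through the remote interface.
    public void deposit(double amount) { balance += amount; }
    public double getBalance() { return balance; }

    // Lifecycle callbacks required by the EntityBean interface.
    public void setEntityContext(EntityContext ctx) {}
    public void unsetEntityContext() {}
    public void ejbActivate() {}
    public void ejbPassivate() {}
    public void ejbLoad() {}
    public void ejbStore() {}
    public void ejbRemove() {}
}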
INTERFACE
EJBHOME
The EJBHome interface is extended by all
enterprise Beans' home interfaces. An enterprise Bean's home interface defines
the methods that allow a client to create, find, and remove EJB objects.
Each enterprise Bean has a home interface. The home
interface must extend the javax.ejb.EJBHome interface, and define the
enterprise Bean type specific create and finder methods (session Beans do not
have finders).
The home interface is defined by the enterprise Bean
provider and implemented by the enterprise Bean container.
Method Summary:
EJBMetaData getEJBMetaData() - Obtain the EJBMetaData interface for the enterprise Bean.
HomeHandle getHomeHandle() - Obtain a handle for the home object.
void remove(Handle handle) - Remove an EJB object identified by its handle.
void remove(java.lang.Object primaryKey) - Remove an EJB object identified by its primary key.
INTERFACE
EJBMETADATA
The EJBMetaData interface allows a client to
obtain the enterprise Bean's meta-data information.
The meta-data is intended for development tools used
for building applications that use deployed enterprise Beans, and for clients
using a scripting language to access the enterprise Bean.
Note that the EJBMetaData is not a remote interface.
The class that implements this interface (this class is typically generated by
container tools) must be serializable, and must be a valid RMI/IDL value type.
Method Summary:
EJBHome getEJBHome() - Obtain the home interface of the enterprise Bean.
java.lang.Class getHomeInterfaceClass() - Obtain the Class object for the enterprise Bean's home interface.
java.lang.Class getPrimaryKeyClass() - Obtain the Class object for the enterprise Bean's primary key class.
java.lang.Class getRemoteInterfaceClass() - Obtain the Class object for the enterprise Bean's remote interface.
boolean isSession() - Test if the enterprise Bean's type is "session".
boolean isStatelessSession() - Test if the enterprise Bean's type is "stateless session".
INTERFACE
EJBOBJECT
The EJBObject interface is extended by all
enterprise Beans' remote interfaces. An enterprise Bean's remote interface
provides the client's view of an EJB object. An enterprise Bean's remote
interface defines the business methods callable by a client.
Each enterprise Bean has a remote interface. The
remote interface must extend the javax.ejb.EJBObject interface, and define the
enterprise Bean specific business methods.
The enterprise Bean's remote interface is defined by
the enterprise Bean provider and implemented by the enterprise Bean container.
Method Summary:
EJBHome getEJBHome() - Obtain the enterprise Bean's home interface.
Handle getHandle() - Obtain a handle for the EJB object.
java.lang.Object getPrimaryKey() - Obtain the primary key of the EJB object.
boolean isIdentical(EJBObject obj) - Test if a given EJB object is identical to the invoked EJB object.
void remove() - Remove the EJB object.
INTERFACE
ENTERPRISEBEAN
The EnterpriseBean interface must be
implemented by every enterprise Bean class. It is a common superinterface for
the SessionBean and EntityBean interfaces.
INTERFACE
ENTITYBEAN
The EntityBean interface is implemented by every
entity enterprise Bean class. The container uses the EntityBean methods to
notify the enterprise Bean instances of the instance's life cycle events.
Method Summary:
void ejbActivate() - A container invokes this method when the instance is taken out of the pool of available instances to become associated with a specific EJB object.
void ejbLoad() - A container invokes this method to instruct the instance to synchronize its state by loading its state from the underlying database.
void ejbPassivate() - A container invokes this method on an instance before the instance becomes disassociated from a specific EJB object.
void ejbRemove() - A container invokes this method before it removes the EJB object that is currently associated with the instance.
void ejbStore() - A container invokes this method to instruct the instance to synchronize its state by storing it to the underlying database.
void setEntityContext(EntityContext ctx) - Set the associated entity context.
void unsetEntityContext() - Unset the associated entity context.
INTERFACE
HOMEHANDLE
The HomeHandle interface is implemented by all
home object handles. A handle is an abstraction of a network reference to a
home object. A handle is intended to be used as a "robust" persistent
reference to a home object.
Method Summary:
EJBHome getEJBHome() - Obtain the home object represented by this handle.
INTERFACE
SESSIONBEAN
The SessionBean interface is implemented by
every session enterprise Bean class. The container uses the SessionBean methods
to notify the enterprise Bean instances of the instance's life cycle events.
Method Summary:
void ejbActivate() - The activate method is called when the instance is activated from its "passive" state.
void ejbPassivate() - The passivate method is called before the instance enters the "passive" state.
void ejbRemove() - A container invokes this method before it ends the life of the session object.
void setSessionContext(SessionContext ctx) - Set the associated session context.
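A minimal sketch of a class that implements this interface: a hypothetical stateless HelloBean with a single business method. Its home and remote interfaces and its deployment descriptor are omitted for brevity.

import javax.ejb.SessionBean;
import javax.ejb.SessionContext;

// Hypothetical stateless session bean showing the SessionBean callbacks listed above.
public class HelloBean implements SessionBean {

    private SessionContext context;

    // Business method, exposed to clients through the bean's remote interface.
    public String hello(String name) {
        return "Hello, " + name;
    }

    // Called by the container when a client invokes create() on the home interface.
    public void ejbCreate() {}

    // SessionBean lifecycle callbacks; a stateless bean usually leaves most of them empty.
    public void setSessionContext(SessionContext ctx) { this.context = ctx; }
    public void ejbActivate() {}
    public void ejbPassivate() {}
    public void ejbRemove() {}
}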
SESSION BEAN DIAGRAMS
HOW TO DEVELOP AN EJB
COMPONENT
When
building an EJB component, the following is a typical order of operations:
1.
Write the
.java files that compose your bean: the component interfaces, the home
interfaces, the enterprise bean class file, and any helper classes you might
need.
2.
Write the
deployment descriptor.
3.
Compile the
.java files from step 1 into .class files.
4.
Using the jar
utility, create an ejb-jar file containing the deployment descriptor and
.class files.
5.
Deploy the
ejb-jar file into your container in a vendor-specific manner, perhaps by
running a vendor-specific tool or perhaps by copying your ejb-jar file into a
folder where your container looks to load ejb-jar files.
6.
Configure your
EJB server so that it can host your ejb-jar file. You might tune settings such as database
connections, thread pools, and so on.
This step is vendor-specific and might be done through a web-based
console or by editing a configuration file.
7.
Start your EJB
container, and confirm that it has loaded your ejb-jar file.
8.
Optionally,
write a standalone test client .java file.
Compile that test client into a .class file. Run the test client from the command line and
have it exercise your bean's APIs.
Posted by Unknown