This is part of a series of blog posts.
To browse the contents go to

those three-letter-acronyms - JTS, JTA, JCA, EJB, CDI


Ah those 3-letter-acronyms

My inspiration and drive behind this series of posts/notes is a comment by Tim Jansen that I read on this site:

I believe the reason why people are using Tomcat rather than a complete JEE server is that they DON’T KNOW all those three-letter-acronym frameworks (JPA, JMA, JTA, EJB, JSF, CDI). JEE is huge, and there is hardly any publically available documentation for it.

Basically you have the choice between complete overload (API Docs and several thousand pages of specifications) on the one hand, and documents that hardly scratch the surface (Sun’s JEE tutorial and JBoss’ Getting Started Guide) on the other hand.

Compared to this, Tomcat is attractive because it is relatively small. Servlets, JSPs and JDBC is all that you need to know. And, even better, there haven’t been any major changes in the last 10 years.

Spring adds many of the capabilities that you will miss in plain Tomcat. And even though Spring is relatively large now, there is only one framework, and a single, free, easy-to-understand but still comprehensive ‘Reference Documentation’ for it. I think pretty much everyone who ever used Spring learned it from that document.

While I would totally agree that JEE5 and 6 are great frameworks, they are still too hard to learn. There is not enough free documentation to show people how to work with it. And the API documentation and specs are literally *contaminated* with old backward-compatibility crud from the J2EE age that you don’t need to know to start a new application, but still makes up 80% of the documentation and confuses the hell out of people.


Most of what I'm writing here are notes taken from other books or sites and that's kind of plagiarism. But God help us.

Most of us are forced to learn these specs and frameworks, and we have no option but to sniff around and learn stuff instead of spending hundreds of dollars on some corporation's training.

And I share because there are many people out there like me.

So where do I start?

Let's go back to 1997, when Java developers moved from Java applets to thin clients. The basic idea behind the thin client was as simple as it was radical: move the entire business logic to the server and limit the client's responsibilities to presentation logic and user interaction.

And that's what we have today. Since all the business logic moved to the server, the server now has to take care of a lot of things: persistence and CRUD operations using JDBC, business logic in servlets, presentation generation with JSPs, managing third-party (service) communication using the standard JDK, and so on.

The most important among these was the CRUD operations using JDBC.


JDBC - DriverManager

First, we had web applications that used the JDBC DriverManager to get a connection for each request.
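A rough sketch of that style (the driver URL, credentials and table below are placeholders); every single request pays the cost of opening and closing a physical connection:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CustomerDao {

    public String findCustomerName(int id) throws SQLException {
        // A brand-new physical connection is opened for every call...
        Connection con = DriverManager.getConnection(
                "jdbc:db2://localhost:50000/wcsdb", "db2admin", "db2admin");
        try {
            PreparedStatement ps = con.prepareStatement("select name from customer where id = ?");
            ps.setInt(1, id);
            ResultSet rs = ps.executeQuery();
            return rs.next() ? rs.getString(1) : null;
        } finally {
            con.close();   // ...and torn down again here, which is expensive under load
        }
    }
}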


Need for pooling

Creating a connection every time the situation demanded one put a lot of overhead on the server. So then came the concept of connection pools.


Note that most databases maintain a connection pool of their own (within the DB), which is irrelevant to our discussion. Also note that your web server has no role in controlling that pool.

This practice of creating and maintaining a connection pool within the application itself was tedious and error prone.
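To see why, here is a deliberately naive sketch of such a hand-rolled pool (no validation, no timeouts, no leak detection); everything in it is invented for illustration:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustration only: a real pool must also validate stale connections,
// grow and shrink, time out waiters and detect leaked connections.
public class NaiveConnectionPool {

    private final BlockingQueue<Connection> idle = new LinkedBlockingQueue<Connection>();

    public NaiveConnectionPool(String url, String user, String pwd, int size) throws SQLException {
        for (int i = 0; i < size; i++) {
            idle.add(DriverManager.getConnection(url, user, pwd));
        }
    }

    public Connection borrow() throws InterruptedException {
        return idle.take();      // blocks if all connections are in use
    }

    public void release(Connection con) {
        idle.offer(con);         // caller must NOT close the connection
    }
}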

JDBC - DataSource

Then came JDBC 2.0, and with it the DataSource interface.
When using data sources, developers had the choice of either using a library that supports them (like c3p0 or DBCP) for connection pooling, or leaving the whole thing to the server.

Using libraries,
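For example, with a pooling library such as Apache Commons DBCP, the application itself builds a pooled DataSource; the driver class, URL and credentials below are placeholders:

import javax.sql.DataSource;
import org.apache.commons.dbcp.BasicDataSource;

public class DataSourceFactory {

    public static DataSource createPooledDataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("com.ibm.db2.jcc.DB2Driver"); // placeholder driver class
        ds.setUrl("jdbc:db2://localhost:50000/wcsdb");       // placeholder URL
        ds.setUsername("db2admin");
        ds.setPassword("db2admin");
        ds.setMaxActive(20);   // pooling is handled by the library, not by us
        return ds;
    }
}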


Using servers,


Around the same time came JNDI, and eventually the approach that became standard practice was to have the data source configured and managed by the server/container, with your application simply requesting connection objects from that data source.
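In that arrangement the pool lives in the container and the application only does a JNDI lookup; the JNDI name below (java:comp/env/jdbc/MyDS) is whatever you configured on the server, used here as a placeholder:

import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class JndiDataSourceExample {

    public void doWork() throws NamingException, SQLException {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/MyDS");

        Connection con = ds.getConnection();   // borrowed from the container's pool
        try {
            // ... run your JDBC statements ...
        } finally {
            con.close();                       // returns the connection to the pool
        }
    }
}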

There are code samples and explanations for all the above approaches here.

Transactions

But then came business requirements, and with them transactions. (Well, transactions were there much before that.)

A transaction is a collection of read/write operations succeeding only if all contained operations succeed.

Inherently a transaction is characterized by four properties (commonly referred as ACID) :

  • Atomicity
  • Consistency
  • Isolation
  • Durability

Atomicity
Atomicity takes individual operations and turns them into an all-or-nothing unit of work, succeeding if and only if all contained operations succeed.

Durability
A successful transaction must permanently change the state of a system, and before ending it, the state changes are recorded in a persisted transaction log.

Isolation
Transactions cannot interfere with each other (as far as their end results are concerned).

Consistency
Every transaction must leave the database in a consistent (correct) state.
If one operation triggers secondary actions (CASCADE, TRIGGERS), those must also succeed otherwise the transaction fails.
If the system is composed of multiple nodes, consistency mandates that all changes be propagated to all nodes (multi-master replication). If slave nodes are updated asynchronously, we break the consistency rule and the system becomes “eventually consistent“.

From a database perspective, the atomicity is a fixed property, but everything else may be traded-off for performance/scalability reasons.

But let’s see how a simple web application can handle transactions.

Simple, Humbler path

Here we manually switch off auto-commit on the connection (or it can be made permanently false by setting the right attributes on the data source), and then, based on the business logic, we commit or roll back the operations.
code:

someDAOMethod() {
    Connection conn = getConnectionFromDatasource();
    conn.setAutoCommit(false);
    try {
        // do some insert/update/delete/select based on your business logic
        // do some more...
        if (somethingDoesNotSeemRight) {
            conn.rollback();
            return;
        }
        // do some more...
        conn.commit(); // hopefully all is well
    } catch (Exception e) {
        conn.rollback();
    } finally {
        conn.close();
    }
}

For simple applications that are sure not to grow in size and popularity, this is fine.

But when your business model is huge, with tons of logic in it, this approach becomes extremely error prone.

Let’s say your web-based application has complex business requirements.
The implementation uses a logical three tier architecture with a web layer, a business-object layer, and a data access layer, all running in the same JVM. In such a layered architecture, transaction management belongs in the business object layer, and is typically demarcated at business facades.
The data access layer will participate in transactions but should not drive them, enabling separate data access methods to be used in the one transaction.

How do we propagate transactions across multiple data access methods?

A naïve approach might be to fetch a JDBC Connection at the beginning of the business object method, switch it to manual commit mode, pass it to various
data access methods, and trigger a commit or rollback in the business object method before it returns.
This has the major drawback of coupling both the business objects and the data access objects to the specific transaction and data access strategy (in this case JDBC).

insideBusinessFunction() {
    Connection conn = getConnectionFromDatasource();
    conn.setAutoCommit(false);
    try {
        // do some business logic
        daoMethod_1(conn, parameters...);
        if (somethingDoesNotSeemRight) {
            conn.rollback();
            return;
        }
        daoMethod_2(conn, parameters...);
        daoMethod_3(conn, parameters...);
        conn.commit(); // hopefully all is well
    } catch (Exception e) {
        conn.rollback();
    } finally {
        conn.close();
    }
}

To achieve properly abstracted transaction management, we need coordinating infrastructure.
That's where JTA comes into the picture.


Enter JTA - Java Transaction API

The Java Transaction API (JTA) allows applications to perform distributed transactions, that is, transactions that access and update data on two or more networked computer resources.
The JTA specifies standard Java interfaces between a transaction manager and the parties involved in a distributed transaction system:
the application,
the application server, and
the resource manager (eg. DB driver) that controls access to the shared resources affected by the transactions.

XAResource
The JTA interface between the transaction manager and the resource manager is javax.transaction.xa.XAResource, a Java mapping of the industry-standard XA interface based on the X/Open CAE Specification (Distributed Transaction Processing: The XA Specification).


As a developer, you can choose between using programmatic transaction demarcation in the EJB code (bean-managed) or declarative demarcation (container-managed). Regardless of whether an enterprise bean uses bean-managed or container-managed transaction demarcation, the burden of implementing transaction management is on the J2EE/EJB container.

From the outside world you can interact with JTA either through your code (programmatically) or through the J2EE server.

Bean managed (Programmatic transaction demarcation)

There are two types of bean-managed transactions:
  • JDBC type
  • JTA type

The javax.transaction.UserTransaction interface provides the application the ability to control transaction boundaries programmatically. Its begin() method starts a global transaction and associates the transaction with the calling thread.


Container managed (Declarative transaction demarcation)

In this case, transaction management is often accomplished using EJBs with CMT (Container-Managed Transactions).

The advantage of CMT is that it keeps transaction demarcation out of the Java code by moving it into the EJB deployment descriptor. Thus, transaction demarcation becomes a cross-cutting aspect that does not need to be hard-coded into application objects. Transactionality is associated with methods on the EJB’s component interface, a neat and usually appropriate harnessing of the programming language.

The javax.transaction.TransactionManager interface allows the application server to control transaction boundaries on behalf of the application being managed.


We will look at both in detail, but before that let's see how JTA's main advantage, distributed transaction support, works.


How the transaction manager supports distributed transactions


The first step of the distributed transaction process is for the application to send a request for the transaction to the transaction manager. Although the final commit/rollback decision treats the transaction as a single logical unit, there can be many transaction branches involved. A transaction branch is associated with a request to each resource manager involved in the distributed transaction. Requests to three different RDBMSs, therefore, require three transaction branches. Each transaction branch must be committed or rolled back by the local resource manager. The transaction manager controls the boundaries of the transaction and is responsible for the final decision as to whether or not the total transaction should commit or rollback. This decision is made in two phases, called the Two-Phase Commit Protocol.

In the first phase, the transaction manager polls all of the resource managers (RDBMSs) involved in the distributed transaction to see if each one is ready to commit. If a resource manager cannot commit, it responds negatively and rolls back its particular part of the transaction so that data is not altered.

In the second phase, the transaction manager determines if any of the resource managers have responded negatively, and, if so, rolls back the whole transaction. If there are no negative responses, the transaction manager commits the whole transaction and returns the results to the application.
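A very rough sketch of that decision in code, assuming we already hold the list of XAResource objects enlisted for one global transaction; a real transaction manager also handles XA_RDONLY votes, heuristic outcomes, per-branch Xids and recovery logging:

import java.util.List;
import javax.transaction.xa.XAException;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

public class TwoPhaseCommitSketch {

    // Illustration only: decide and complete a distributed transaction.
    static void completeTransaction(List<XAResource> resources, Xid xid) {
        boolean commit = true;

        // Phase 1: ask every resource manager whether it can commit its branch.
        for (XAResource res : resources) {
            try {
                res.prepare(xid);      // throws XAException if this branch must roll back
            } catch (XAException vote) {
                commit = false;        // one negative vote dooms the whole transaction
                break;
            }
        }

        // Phase 2: commit everywhere, or roll back everywhere.
        for (XAResource res : resources) {
            try {
                if (commit) {
                    res.commit(xid, false);   // false = two-phase commit of a prepared branch
                } else {
                    res.rollback(xid);
                }
            } catch (XAException e) {
                // A real transaction manager would log and recover; we just report it.
                e.printStackTrace();
            }
        }
    }
}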





Bean managed (Programmatic transaction demarcation)


There are two types of bean-managed transactions:

  • JDBC type—You delimit JDBC transactions with the commit and rollback methods of the connection interface.
  • JTA type—You invoke the begin, commit, and rollback methods of the UserTransaction interface to demarcate JTA transactions.

We have already seen how JDBC transactions work and why that’s not a good design (Never mix business objects and the data access objects).

So now let’s see how to programmatically do JTA.

The javax.transaction.UserTransaction interface provides the application the ability to control transaction boundaries programmatically. Its begin() method starts a global transaction and associates the transaction with the calling thread.
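When a transaction manager is available, a minimal sketch of such bean-managed demarcation looks like this (assuming the standard java:comp/UserTransaction JNDI name, a server-managed XA-capable data source under a placeholder name, and an invented account table; the container enlists the connection and generates the Xids for you):

import java.sql.Connection;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

public class TransferService {

    public void transfer() throws Exception {
        InitialContext ctx = new InitialContext();
        UserTransaction utx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
        DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/MyXADS"); // placeholder name

        utx.begin();                            // global transaction bound to this thread
        Connection con = ds.getConnection();    // enlisted in the transaction by the container
        try {
            Statement stmt = con.createStatement();
            stmt.executeUpdate("update account set balance = balance - 100 where id = 1");
            stmt.executeUpdate("update account set balance = balance + 100 where id = 2");
            utx.commit();
        } catch (Exception e) {
            utx.rollback();
            throw e;
        } finally {
            con.close();
        }
    }
}

The rest of this section shows what happens underneath when there is no transaction manager and we drive the XAResource ourselves, which is why we start by writing our own Xid implementation.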

You must first implement an Xid class for identifying transactions (this would normally be done by the transaction manager). The Xid contains three elements: formatID, gtrid (global transaction ID), and bqual (branch qualifier ID).

The formatID is usually zero, meaning that you are using the OSI CCR (Open Systems Interconnection Commitment, Concurrency, and Recovery standard) for naming. If you are using another format, the formatID should be greater than zero. A value of -1 means that the Xid is null.

The gtrid and bqual can each contain up to 64 bytes of binary code to identify the global transaction and the branch transaction, respectively. The only requirement is that the gtrid and bqual taken together must be globally unique. Again, this can be achieved by using the naming rules specified in the OSI CCR.

I tested this with IBM DB2.

package com.test.jta;

import javax.transaction.xa.Xid;

public class MyXid implements Xid {

    protected int formatId;
    protected byte gtrid[];
    protected byte bqual[];

    public MyXid() {
    }

    public MyXid(int formatId, byte gtrid[], byte bqual[]) {
        this.formatId = formatId;
        this.gtrid = gtrid;
        this.bqual = bqual;
    }

    public int getFormatId() {
        return formatId;
    }

    public byte[] getBranchQualifier() {
        return bqual;
    }

    public byte[] getGlobalTransactionId() {
        return gtrid;
    }
}

package com.test.main;

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

import javax.sql.XAConnection;
import javax.transaction.xa.XAException;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

import com.ibm.db2.jcc.DB2XADataSource;
import com.test.jta.MyXid;

public class TestMain {

    public static void main(String[] args) {
        DB2XADataSource xaDS;
        XAConnection xaCon = null;
        XAResource xaRes;
        Xid xid;
        Connection con = null;
        Statement stmt;
        int ret;
        try {
            xaDS = new com.ibm.db2.jcc.DB2XADataSource();
            xaDS.setDataSourceName("db2ds");
            xaDS.setDatabaseName("wcsdb");
            xaCon = xaDS.getXAConnection("db2admin", "db2admin");
            xaRes = xaCon.getXAResource();

            con = xaCon.getConnection();
            stmt = con.createStatement();

            xid = new MyXid(100, new byte[] { 0x01 }, new byte[] { 0x02 });
            System.out.println("transaction starting");
            xaRes.start(xid, XAResource.TMNOFLAGS);
            stmt.executeUpdate("insert into atest(bcol) values ('hello boy')");
            xaRes.end(xid, XAResource.TMSUCCESS);
            System.out.println("transaction ending");
            ret = xaRes.prepare(xid);
            if (ret == XAResource.XA_OK) {
                xaRes.commit(xid, false);
            } else {
                xaRes.rollback(xid);
            }
            System.out.println("transaction committed");
        } catch (XAException e) {
            e.printStackTrace();
        } catch (SQLException e) {
            e.printStackTrace();
        } finally {
            try {
                if (con != null) {
                    con.close();
                }
                if (xaCon != null) {
                    xaCon.close();
                }
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }
}

What happens if we mix local transactions in between these XA transactions?

It shouldn't cause any problems, provided we suspend the XA transaction, which allows the same connection to perform a local operation:
xaRes.start(xid, XAResource.TMNOFLAGS);
stmt.executeUpdate(...);
xaRes.end(xid, XAResource.TMSUSPEND);

// This update is done outside of transaction scope, so it
// is not affected by the XA rollback.
stmt.executeUpdate(...);

xaRes.start(xid, XAResource.TMRESUME);

What happens if one XA resource is shared among 2 transactions?

Two transaction branches are created, but they do not belong to the same distributed transaction. JTA allows the XA resource to do a two-phase commit on the first branch even though the resource is still associated with the second branch.

xid1 = new MyXid(100, new byte[]{0x01}, new byte[]{0x02});
xid2 = new MyXid(100, new byte[]{0x11}, new byte[]{0x22});

xaRes.start(xid1, XAResource.TMNOFLAGS);
stmt.executeUpdate(...);
xaRes.end(xid1, XAResource.TMSUCCESS);

xaRes.start(xid2, XAResource.TMNOFLAGS);

// Should allow XA resource to do two-phase commit on
// transaction 1 while associated to transaction 2
ret = xaRes.prepare(xid1);
if (ret == XAResource.XA_OK) {
    xaRes.commit(xid1, false);
}

stmt.executeUpdate(...);
xaRes.end(xid2, XAResource.TMSUCCESS);

ret = xaRes.prepare(xid2);
if (ret == XAResource.XA_OK) {
    xaRes.rollback(xid2);
}

Can transaction branches on different connections be joined into a single branch if they connect to the same resource manager?

Yes. This feature improves distributed transaction efficiency because it reduces the number of two-phase commit processes. Two XA connections to the same database server are created. Each connection creates its own XA resource, regular JDBC connection, and statement. Before the second XA resource starts a transaction branch, it checks to see if it uses the same resource manager as the first XA resource uses. If this is case, as in this example, it joins the first branch created on the first XA connection instead of creating a new branch. Later, the transaction branch can be prepared and committed using either XA resource.

xaCon1 = xaDS.getXAConnection("jdbc_user", "jdbc_password");
xaRes1 = xaCon1.getXAResource();
con1 = xaCon1.getConnection();
stmt1 = con1.createStatement();

xid1 = new MyXid(100, new byte[]{0x01}, new byte[]{0x02});
xaRes1.start(xid1, XAResource.TMNOFLAGS);
stmt1.executeUpdate("insert into test_table1 values (100)");
xaRes1.end(xid1, XAResource.TMSUCCESS);

xaCon2 = xaDS.getXAConnection("jdbc_user", "jdbc_password");
xaRes2 = xaCon2.getXAResource();
con2 = xaCon2.getConnection();
stmt2 = con2.createStatement();

if (xaRes2.isSameRM(xaRes1)) {
    xaRes2.start(xid1, XAResource.TMJOIN);
    stmt2.executeUpdate("insert into test_table2 values (100)");
    xaRes2.end(xid1, XAResource.TMSUCCESS);
} else {
    xid2 = new MyXid(100, new byte[]{0x01}, new byte[]{0x03});
    xaRes2.start(xid2, XAResource.TMNOFLAGS);
    stmt2.executeUpdate("insert into test_table2 values (100)");
    xaRes2.end(xid2, XAResource.TMSUCCESS);
    ret = xaRes2.prepare(xid2);
    if (ret == XAResource.XA_OK) {
        xaRes2.commit(xid2, false);
    }
}

ret = xaRes1.prepare(xid1);
if (ret == XAResource.XA_OK) {
    xaRes1.commit(xid1, false);
}


Partially programmatic

There is another way to do all this: configure the XA data source on your J2EE application server, then use JNDI to look it up and get XA connections from it. (Remember you select non-managed or non-CMB when creating the data source.)
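A hedged sketch of that middle ground, assuming the server publishes the XA-capable data source under a JNDI name of your choosing (jdbc/MyXADS below is a placeholder); everything from xaRes.start() onwards works exactly as in the DB2 example above:

import javax.naming.InitialContext;
import javax.sql.XAConnection;
import javax.sql.XADataSource;
import javax.transaction.xa.XAResource;

public class JndiXaExample {

    public void doWork() throws Exception {
        // Look up the server-configured XA data source instead of instantiating
        // a driver-specific class such as DB2XADataSource in application code.
        InitialContext ctx = new InitialContext();
        XADataSource xaDS = (XADataSource) ctx.lookup("jdbc/MyXADS");

        XAConnection xaCon = xaDS.getXAConnection("db2admin", "db2admin");
        XAResource xaRes = xaCon.getXAResource();
        // ... create your Xid, then start/end/prepare/commit as before ...
    }
}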


Container managed (Declarative transaction demarcation)

In this case, transaction management is often accomplished using EJBs with CMT (Container-Managed Transactions).

The advantage of CMT is that it keeps transaction demarcation out of the Java code by moving it into the EJB deployment descriptor. Thus, transaction demarcation becomes a cross-cutting aspect that does not need to be hard-coded into application objects. Transactionality is associated with methods on the EJB’s component interface, a neat and usually appropriate harnessing of the programming language.

Another advantage of EJB CMT is that a number of container-specific features can be addressed only via the EJB deployment descriptor, such as custom isolation levels, an important setting for JDBC transactions.

Another advantage is support for remote transaction propagation.

So far, we have considered only transactions within a single application server JVM.

The EJB component model supports this notion of transaction propagation: propagation
across calls to remote servers. However, such interoperability is not required by the J2EE 1.3 specification.
The original intention of the EJB 1.x transaction model was to allow for using a Java Transaction
Service—that is, the Java mapping of the OMG Object Transaction Service specification for CORBA—underneath.
Using EJB architecture JTS can now propagate transaction contexts across remote calls to other servers.

Disadvantages:

In EJB 1.0 and 1.1, EJBs offered only remote interfaces, meaning that declarative transaction management
was an option only if an object was made distributed, or at least pseudo-distributed.
Since the release of EJB 2.0 in 2001, CMT has required less overhead, as it became possible to give EJBs only local interfaces.

Note that you do not need EJB’s remote transaction propagation to execute a single remote method invocation that runs within its own transaction, probably the commonest requirement in distributed applications.
In such a scenario, a new transaction context will be started when the call arrives at the remote
server, and ended when the call returns. This can entirely be handled by the service implementation on the server, and does not have to affect the client or the remoting protocol in the first place. A lightweight remoting solution like RMI-JRMP or Hessian/Burlap can serve perfectly well here.

The main value of remote EJBs lies in the ability to propagate existing transaction contexts from one JVM to another, which is often not needed at all. (Keeping transaction contexts open over multiple remote calls, as when making several invocations to a Stateful Session Bean in the same transaction, is inherently dangerous. For example, if there’s a network failure, the transaction may be left open, unnecessarily consuming resources and locking out other calls.)

Before we get into EJB let's see the different transaction attributes allowed by your container.


Transaction attributes

A transaction attribute is a parameter that controls the scope of a transaction.

Because transaction attributes are stored in the deployment descriptor, they can be changed during several phases of J2EE application development: at EJB creation, at assembly (packaging), or at deployment. However, as an EJB developer, it is your responsibility to specify the attributes when creating the EJB.

You can specify the transaction attributes for the entire enterprise bean or for individual methods. If you've specified one attribute for a method and another for the bean, the attribute for the method takes precedence.

A transaction attribute may have one of the following values:

  • Required
  • RequiresNew
  • Mandatory
  • NotSupported
  • Supports
  • Never

Required
If the client is running within a transaction and invokes the enterprise bean's method, the method executes within the client's transaction. If the client is not associated with a transaction, the container starts a new transaction before running the method.

RequiresNew
If the client is running within a transaction and invokes the EJB's method, the container takes the following steps:
1. Suspends the client's transaction.
2. Starts a new transaction.
3. Delegates the call to the method.
4. Resumes the client's transaction after the method completes.
If the client is not associated with a transaction, the container starts a new transaction before running the method.

Mandatory
If the client is running within a transaction and invokes the EJB's method, the method executes within the client's transaction. If the client is not associated with a transaction, the container throws a TransactionRequiredException.

NotSupported
If the client is running within a transaction and invokes the EJB's method, the container suspends the client's transaction before invoking the method. After the method has completed, the container resumes the client's transaction.
If the client is not associated with a transaction, the container does not start a new transaction before running the method.
Use the NotSupported attribute for methods that don't need transactions. Because transactions involve overhead, this attribute may improve performance.

Supports
If the client is running within a transaction and invokes the EJB's method, the method executes within the client's transaction. If the client is not associated with a transaction, the container does not start a new transaction before running the method.

Never
If the client is running within a transaction and invokes the enterprise bean's method, the container throws a RemoteException. If the client is not associated with a transaction, the container does not start a new transaction before running the method.
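For illustration, this is roughly how an attribute is attached to a method in an EJB 2.x ejb-jar.xml deployment descriptor (the bean and method names are made up):

<assembly-descriptor>
  <container-transaction>
    <method>
      <ejb-name>OrderFacade</ejb-name>
      <method-name>placeOrder</method-name>
    </method>
    <trans-attribute>RequiresNew</trans-attribute>
  </container-transaction>
</assembly-descriptor>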

The following table summarizes the effects of the transaction attributes. Transactions can be T1, T2, or None. (Both T1 and T2 transactions are controlled by the container.)

T1 transaction—Is associated with the client that calls a method in the enterprise bean. In most cases, the client is another enterprise bean.
T2 transaction—Is started by the container, just before the method executes.

Table 6-1  Transaction Attributes and Scope

Transaction Attribute    Client's Transaction    Business Method's Transaction
Required                 None                    T2
                         T1                      T1
RequiresNew              None                    T2
                         T1                      T2
Mandatory                None                    Error
                         T1                      T1
NotSupported             None                    None
                         T1                      None
Supports                 None                    None
                         T1                      T1
Never                    None                    None
                         T1                      Error



Rolling Back a Container-Managed Transaction

There are two ways to roll back a container-managed transaction:

First, if a system exception is thrown, the container automatically rolls back the transaction.
Second, by invoking the setRollbackOnly method of the EJBContext interface, the bean method instructs the container to roll back the transaction. If the bean throws an application exception, the rollback is not automatic, but may be initiated by a call to setRollbackOnly.
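A hedged sketch of the second case inside a bean using CMT; the bean, the order values, the application exception and the credit check are invented for illustration:

import javax.ejb.EJBContext;

public class OrderFacadeBean {

    // Checked application exception: by itself it would NOT roll back the transaction.
    public static class OrderRejectedException extends Exception {
        public OrderRejectedException(String msg) { super(msg); }
    }

    private EJBContext ctx;  // handed to the bean by the container (setSessionContext)

    // Runs inside a container-managed transaction (e.g. trans-attribute Required).
    public void placeOrder(String orderId, double amount) throws OrderRejectedException {
        if (!creditCheckPasses(amount)) {
            // Mark the transaction for rollback explicitly before reporting the failure.
            ctx.setRollbackOnly();
            throw new OrderRejectedException("credit check failed for order " + orderId);
        }
        // ... persist the order; a system (runtime) exception here would roll back automatically ...
    }

    private boolean creditCheckPasses(double amount) {
        return amount < 10000;   // placeholder business rule
    }
}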


How is ISOLATION handled for overlapping transactions?

Usually concurrency control is achieved through locking. But as we all know, locking increases the serializable portion of the executed code, affecting parallelization.

The SQL standard defines four Isolation levels:

READ_UNCOMMITTED
READ_COMMITTED
REPEATABLE_READ
SERIALIZABLE

All but the SERIALIZABLE level are subject to data anomalies (phenomena) that might occur according to the following pattern:

Dirty read

A dirty read happens when a transaction is allowed to read the uncommitted changes of some other running transaction, because no lock prevents it. The second transaction then ends up using a value that is inconsistent once the first transaction rolls back.


Non-repeatable read

A non-repeatable read manifests when consecutive reads yield different results because a concurrent transaction has just updated the record we're reading. This is undesirable since we end up using stale data. It is prevented by holding a shared lock (read lock) on the record being read for the whole duration of the current transaction.


Phantom read

A phantom read happens when a second transaction inserts a row that matches the selection criteria of a query the first transaction ran earlier. We therefore end up using stale data, which might affect our business operation. This is prevented using range locks or predicate locking.


Even though the SQL standard mandates the SERIALIZABLE isolation level as the default, most database management systems use a different default level.

The default for Oracle, PostgreSQL and MS SQL Server is READ_COMMITTED, for MySQL (InnoDB) it is REPEATABLE_READ, and for DB2 it is CURSOR STABILITY (DB2's equivalent of READ_COMMITTED).

Note that individual database vendors, along with their J2EE server products, provide various methodologies of their own.
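At the JDBC level the levels map to constants on java.sql.Connection, and you can request one per connection; whether the driver honours every level is database specific. A small sketch, assuming dataSource is any configured DataSource:

import java.sql.Connection;
import javax.sql.DataSource;

public class IsolationExample {

    public void runAtSerializable(DataSource dataSource) throws Exception {
        Connection con = dataSource.getConnection();
        try {
            // Ask the driver for a stricter level than its default, for this connection only.
            con.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
            System.out.println("running at isolation level " + con.getTransactionIsolation());
            // ... do transactional work ...
        } finally {
            con.close();
        }
    }
}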


Table 16.1. Transaction Isolation Levels

Transaction Level              Dirty Read   Phantom Read   Nonrepeatable Read   Restriction   Performance
TRANSACTION_NONE               N/A          N/A            N/A                  Lowest        Fastest
TRANSACTION_READ_UNCOMMITTED   Yes          Yes            Yes                  Low           Faster
TRANSACTION_READ_COMMITTED     No           Yes            Yes                  High          Fast
TRANSACTION_REPEATABLE_READ    No           Yes            No                   Higher        Medium
TRANSACTION_SERIALIZABLE       No           No             No                   Highest       Slow



Locking mechanisms - Optimistic and pessimistic locking

Optimistic and pessimistic locking (or concurrency control) are ways of addressing a problem such as the following:
1. User A reads the row for customer #123
2. User B reads the row for customer #123
3. User B updates the row for customer #123
4. User A updates the row for customer #123 and overwrites User B’s changes
The problem here is that one user has changes that conflict with another user’s, and unless we do something about it, we’ll lose User B’s changes without even noticing.
The pessimistic concurrency control approach is to have the database server lock the row on User A's behalf, so User B has to wait until User A has finished its work before proceeding. We effectively solve the problem by not allowing two users to work on the same piece of data at the same time; it simply prevents the conflict.
The optimistic concurrency control approach doesn't actually lock anything. Instead, it asks User A to remember what the row looked like when he first saw it, and when it's time to update, to ask the database to go ahead only if the row still looks the way he remembers it. It doesn't prevent a possible conflict, but it can detect one before any damage is done and fail safely.
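A common way to implement the optimistic variant with plain JDBC is a version column: the UPDATE succeeds only if the row still carries the version that was read earlier (the table, column and variable names here are invented for illustration):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class OptimisticUpdate {

    // Earlier we did "select name, version from customer where id = ?" and remembered the version.
    public void rename(Connection con, int customerId, String newName, int versionReadEarlier)
            throws SQLException {
        PreparedStatement ps = con.prepareStatement(
                "update customer set name = ?, version = version + 1 " +
                "where id = ? and version = ?");
        ps.setString(1, newName);
        ps.setInt(2, customerId);
        ps.setInt(3, versionReadEarlier);

        if (ps.executeUpdate() == 0) {
            // 0 rows updated: someone else changed the row since we read it.
            // The conflict is detected before any damage is done; retry or report it.
            throw new SQLException("row was modified by another transaction");
        }
    }
}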

Behind the scenes on the physical connection and database these could be implemented in various ways.

The major methods, which have each many variants, and in some cases may overlap or be combined, are:

Locking (e.g., Two-phase locking - 2PL)

Controlling access to data by locks assigned to the data. Access of a transaction to a data item (database object) locked by another transaction may be blocked (depending on lock type and access operation type) until lock release.

Serialization graph checking (also called Serializability, or Conflict, or Precedence graph checking)

Checking for cycles in the schedule's graph and breaking them by aborts.

Timestamp ordering (TO)

Assigning timestamps to transactions, and controlling or checking access to data by timestamp order.

Commitment ordering (or Commit ordering; CO)

Controlling or checking transactions' chronological order of commit events to be compatible with their respective precedence order.

Other major concurrency control types that are utilized in conjunction with the methods above include:

Multiversion concurrency control (MVCC)

Increasing concurrency and performance by generating a new version of a database object each time it is written, and allowing transactions to read one of the last relevant versions of each object, depending on the scheduling method. E.g.: shadow paging.

Index concurrency control

Synchronizing access operations to indexes, rather than to user data. Specialized methods provide substantial performance gains.

Private workspace model (Deferred update)

Each transaction maintains a private workspace for its accessed data, and its changed data become visible outside the transaction only upon its commit.


The database's role in all this connection and concurrency management

Wait a sec. Do we Java developers really need to be concerned with all this? Isn't that why we have JTA, JPA and the rest?

Well, usually we need not get our hands dirty.

Every setting that you provide through JTA (e.g. the isolation level) is reflected on the connections received from the database.

URL:
https://iggyfernandez.files.wordpress.com/2010/01/nocoug-200511-no-free-lunch-read-consistency.pdf



