
· 3 min read

Author: FUNKYE (Chen Jianbin), principal engineer at an Internet company in Hangzhou.

Preface

  1. Let's start by examining the package structure. Under seata-dubbo and seata-dubbo-alibaba, there is a common class named TransactionPropagationFilter, corresponding to Apache Dubbo and Alibaba Dubbo respectively.


Source Code Analysis

package io.seata.integration.dubbo;

import io.seata.core.context.RootContext;
import org.apache.dubbo.common.Constants;
import org.apache.dubbo.common.extension.Activate;
import org.apache.dubbo.rpc.Filter;
import org.apache.dubbo.rpc.Invocation;
import org.apache.dubbo.rpc.Invoker;
import org.apache.dubbo.rpc.Result;
import org.apache.dubbo.rpc.RpcContext;
import org.apache.dubbo.rpc.RpcException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Activate(group = {Constants.PROVIDER, Constants.CONSUMER}, order = 100)
public class TransactionPropagationFilter implements Filter {

    private static final Logger LOGGER = LoggerFactory.getLogger(TransactionPropagationFilter.class);

    @Override
    public Result invoke(Invoker<?> invoker, Invocation invocation) throws RpcException {
        // get local XID
        String xid = RootContext.getXID();
        String xidInterceptorType = RootContext.getXIDInterceptorType();
        // get XID from dubbo param
        String rpcXid = getRpcXid();
        String rpcXidInterceptorType = RpcContext.getContext().getAttachment(RootContext.KEY_XID_INTERCEPTOR_TYPE);
        if (LOGGER.isDebugEnabled()) {
            LOGGER.debug("xid in RootContext[{}] xid in RpcContext[{}]", xid, rpcXid);
        }
        boolean bind = false;
        if (xid != null) {
            // transfer xid
            RpcContext.getContext().setAttachment(RootContext.KEY_XID, xid);
            RpcContext.getContext().setAttachment(RootContext.KEY_XID_INTERCEPTOR_TYPE, xidInterceptorType);
        } else {
            if (rpcXid != null) {
                // bind XID
                RootContext.bind(rpcXid);
                RootContext.bindInterceptorType(rpcXidInterceptorType);
                bind = true;
                if (LOGGER.isDebugEnabled()) {
                    LOGGER.debug("bind[{}] interceptorType[{}] to RootContext", rpcXid, rpcXidInterceptorType);
                }
            }
        }
        try {
            return invoker.invoke(invocation);
        } finally {
            if (bind) {
                // remove the xid which has finished
                String unbindInterceptorType = RootContext.unbindInterceptorType();
                String unbindXid = RootContext.unbind();
                if (LOGGER.isDebugEnabled()) {
                    LOGGER.debug("unbind[{}] interceptorType[{}] from RootContext", unbindXid, unbindInterceptorType);
                }
                // if the unbound xid is not the xid of the current rpc
                if (!rpcXid.equalsIgnoreCase(unbindXid)) {
                    LOGGER.warn("xid in change during RPC from {} to {}, xidInterceptorType from {} to {}", rpcXid, unbindXid, rpcXidInterceptorType, unbindInterceptorType);
                    if (unbindXid != null) {
                        // bind the xid back
                        RootContext.bind(unbindXid);
                        RootContext.bindInterceptorType(unbindInterceptorType);
                        LOGGER.warn("bind [{}] interceptorType[{}] back to RootContext", unbindXid, unbindInterceptorType);
                    }
                }
            }
        }
    }

    /**
     * get the rpc xid
     * @return the rpc xid
     */
    private String getRpcXid() {
        String rpcXid = RpcContext.getContext().getAttachment(RootContext.KEY_XID);
        if (rpcXid == null) {
            rpcXid = RpcContext.getContext().getAttachment(RootContext.KEY_XID.toLowerCase());
        }
        return rpcXid;
    }
}
  1. From the source code, we can deduce the corresponding processing logic.


Key Points

  1. Dubbo @Activate Annotation:
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
public @interface Activate {

    String[] group() default {};

    String[] value() default {};

    String[] before() default {};

    String[] after() default {};

    int order() default 0;
}

From the @Activate annotation on Seata's Dubbo filter, with parameters @Activate(group = {Constants.PROVIDER, Constants.CONSUMER}, order = 100), we can see that both the Dubbo service provider and the Dubbo consumer trigger this filter; therefore, the Seata initiator will transmit the XID. The code above shows this clearly.

  1. Dubbo implicit parameter passing: parameters can be transmitted implicitly between service consumers and providers through setAttachment and getAttachment on RpcContext, as the example below shows.

Fetching: RpcContext.getContext().getAttachment(RootContext.KEY_XID);

Passing: RpcContext.getContext().setAttachment(RootContext.KEY_XID, xid);
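For illustration, here is a minimal sketch of this mechanism (the demoService call is hypothetical, not from the Seata source):

// consumer side: set the attachment before the remote call;
// Dubbo carries all attachments of the current RpcContext along with the invocation
RpcContext.getContext().setAttachment(RootContext.KEY_XID, RootContext.getXID());
demoService.buy("sku-1"); // hypothetical remote call

// provider side: read the attachment back inside the service implementation
String xid = RpcContext.getContext().getAttachment(RootContext.KEY_XID);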

Conclusion

For further source code reading, please visit the Seata official website

· 8 min read

One. Introduction

In the analysis of the Spring module, it is noted that Seata's Spring module handles beans involved in distributed transactions. Upon project startup, when the GlobalTransactionalScanner detects references to TCC services (i.e., TCC transaction participants), it dynamically proxies them by weaving in the implementation class of MethodInterceptor under the TCC mode. The initiator of the TCC transaction still uses the @GlobalTransactional annotation to initiate it, and a generic implementation class of MethodInterceptor is woven in.

The implementation class of MethodInterceptor under the TCC mode is referred to as TccActionInterceptor (in the Spring module). This class invokes ActionInterceptorHandler (in the TCC module) to handle the transaction process under the TCC mode.

The primary functions of TCC dynamic proxy are: generating the TCC runtime context, propagating business parameters, and registering branch transaction records.

Two. Introduction to TCC Mode

In the Two-Phase Commit (2PC) protocol, the transaction manager coordinates resource management in two phases. The resource manager provides three operations: the prepare operation in the first phase, and the commit operation and rollback operation in the second phase.

public interface TccAction {

    @TwoPhaseBusinessAction(name = "tccActionForTest", commitMethod = "commit", rollbackMethod = "rollback")
    public boolean prepare(BusinessActionContext actionContext,
                           @BusinessActionContextParameter(paramName = "a") int a,
                           @BusinessActionContextParameter(paramName = "b", index = 0) List b,
                           @BusinessActionContextParameter(isParamInProperty = true) TccParam tccParam);

    public boolean commit(BusinessActionContext actionContext);

    public boolean rollback(BusinessActionContext actionContext);
}

This is a participant instance in TCC. Participants need to implement three methods, where the first parameter must be BusinessActionContext, and the return type of the methods is fixed. These methods are exposed as microservices to be invoked by the transaction manager.

  • prepare: Checks and reserves resources. For example, deducting the account balance and increasing the same frozen balance.
  • commit: Uses the reserved resources to complete the actual business operation. For example, reducing the frozen balance to complete the fund deduction business.
  • rollback: Releases the reserved resources. For example, adding the frozen balance back to the account balance.

The BusinessActionContext encapsulates the context of the current transaction: xid, branchId, actionName, and the parameters annotated with @BusinessActionContextParameter.

There are several points to note in participant business:

  1. Ensure business idempotence, supporting duplicate submission and rollback of the same transaction.
  2. Prevent hanging, i.e., the case where the second-phase rollback arrives before the first-phase try has executed (see the sketch after this list).
  3. The consistency requirement is relaxed to eventual consistency, so intermediate states of the transaction may be visible to reads in the meantime.
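A minimal sketch of the first two safeguards, assuming a hypothetical TxRecordDao that persists per-branch state (illustrative only, not Seata code):

public class SafeTccAction {

    private final TxRecordDao txRecordDao; // hypothetical branch-state store

    public SafeTccAction(TxRecordDao txRecordDao) {
        this.txRecordDao = txRecordDao;
    }

    public boolean prepare(BusinessActionContext ctx) {
        String branchId = String.valueOf(ctx.getBranchId());
        // anti-hanging: if a rollback already arrived for this branch,
        // refuse to reserve resources so nothing stays frozen forever
        if (txRecordDao.isRolledBack(ctx.getXid(), branchId)) {
            return false;
        }
        txRecordDao.markTried(ctx.getXid(), branchId);
        // ... check and reserve business resources here ...
        return true;
    }

    public boolean commit(BusinessActionContext ctx) {
        String branchId = String.valueOf(ctx.getBranchId());
        // idempotence: committing an already-committed branch is a no-op success
        if (txRecordDao.isCommitted(ctx.getXid(), branchId)) {
            return true;
        }
        // ... consume the reserved resources here ...
        txRecordDao.markCommitted(ctx.getXid(), branchId);
        return true;
    }
}

// hypothetical persistence helper backing the checks above
interface TxRecordDao {
    boolean isRolledBack(String xid, String branchId);
    boolean isCommitted(String xid, String branchId);
    void markTried(String xid, String branchId);
    void markCommitted(String xid, String branchId);
}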

Three. Remoting Package Analysis

All classes in the package serve DefaultRemotingParser. Dubbo, LocalTCC, and SofaRpc are responsible for parsing classes under their respective RPC protocols.

Main methods of DefaultRemotingParser:

  1. Determine if the bean is a remoting bean, code:
    @Override
    public boolean isRemoting(Object bean, String beanName) throws FrameworkException {
        // check whether the bean is a service consumer or a service provider
        return isReference(bean, beanName) || isService(bean, beanName);
    }
  1. Remoting bean parsing, which parses RPC beans into RemotingDesc. The entry point is parserRemotingServiceInfo(); its full code appears under "TCC Resource Registration" below.

Remote beans are parsed using allRemotingParsers. allRemotingParsers is dynamically loaded in initRemotingParser() by calling EnhancedServiceLoader.loadAll(RemotingParser.class), which uses the SPI loading mechanism to load the subclasses of RemotingParser.

For extension purposes, such as implementing a parser for feign remote calls, simply register the relevant RemotingParser implementation class in the SPI configuration. This approach offers great extensibility; a hedged sketch follows below.

RemotingDesc contains specific information about remote beans required for the transaction process, such as targetBean, interfaceClass, interfaceClassName, protocol, isReference, and so on.
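As a hedged sketch of such an extension (the RemotingParser method set is assumed from its usage described in this series, and the feign-detection logic is purely illustrative), the implementation class would look roughly like this, with its fully qualified name listed in a META-INF/services file named after the RemotingParser interface:

public class FeignRemotingParser implements RemotingParser {

    @Override
    public boolean isRemoting(Object bean, String beanName) {
        return isReference(bean, beanName) || isService(bean, beanName);
    }

    @Override
    public boolean isReference(Object bean, String beanName) {
        // illustrative check: treat feign client proxies as service consumers
        return bean.getClass().getName().contains("feign");
    }

    @Override
    public boolean isService(Object bean, String beanName) {
        return false; // in this sketch, feign beans are always callers
    }

    @Override
    public RemotingDesc getServiceDesc(Object bean, String beanName) {
        RemotingDesc desc = new RemotingDesc();
        desc.setTargetBean(bean);
        // ... fill in interfaceClass, interfaceClassName, protocol, isReference ...
        return desc;
    }

    @Override
    public short getProtocol() {
        return 0; // a dedicated protocol constant would be defined for feign
    }
}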

  1. TCC Resource Registration
public RemotingDesc parserRemotingServiceInfo(Object bean, String beanName) {
    RemotingDesc remotingBeanDesc = getServiceDesc(bean, beanName);
    if (remotingBeanDesc == null) {
        return null;
    }
    remotingServiceMap.put(beanName, remotingBeanDesc);

    Class<?> interfaceClass = remotingBeanDesc.getInterfaceClass();
    Method[] methods = interfaceClass.getMethods();
    if (isService(bean, beanName)) {
        try {
            //service bean, registry resource
            Object targetBean = remotingBeanDesc.getTargetBean();
            for (Method m : methods) {
                TwoPhaseBusinessAction twoPhaseBusinessAction = m.getAnnotation(TwoPhaseBusinessAction.class);
                if (twoPhaseBusinessAction != null) {
                    TCCResource tccResource = new TCCResource();
                    tccResource.setActionName(twoPhaseBusinessAction.name());
                    tccResource.setTargetBean(targetBean);
                    tccResource.setPrepareMethod(m);
                    tccResource.setCommitMethodName(twoPhaseBusinessAction.commitMethod());
                    tccResource.setCommitMethod(ReflectionUtil
                        .getMethod(interfaceClass, twoPhaseBusinessAction.commitMethod(),
                            new Class[] {BusinessActionContext.class}));
                    tccResource.setRollbackMethodName(twoPhaseBusinessAction.rollbackMethod());
                    tccResource.setRollbackMethod(ReflectionUtil
                        .getMethod(interfaceClass, twoPhaseBusinessAction.rollbackMethod(),
                            new Class[] {BusinessActionContext.class}));
                    //registry tcc resource
                    DefaultResourceManager.get().registerResource(tccResource);
                }
            }
        } catch (Throwable t) {
            throw new FrameworkException(t, "parser remoting service error");
        }
    }
    if (isReference(bean, beanName)) {
        //reference bean, TCC proxy
        remotingBeanDesc.setReference(true);
    }
    return remotingBeanDesc;
}

First, determine whether the bean is a transaction participant. If so, obtain the interfaceClass from RemotingDesc, iterate through the methods of the interface, and check whether a method carries the @TwoPhaseBusinessAction annotation. If found, encapsulate the parameters into a TCCResource and register the TCC resource through DefaultResourceManager.

Here, DefaultResourceManager will search for the corresponding resource manager based on the BranchType of the Resource. The resource management class under the TCC mode is in the tcc module.

This RPC parsing class is mainly provided for use by the spring module. parserRemotingServiceInfo() is encapsulated into the TCCBeanParserUtils utility class in the spring module. During project startup, the GlobalTransactionScanner in the spring module parses TCC beans through the utility class. TCCBeanParserUtils calls TCCResourceManager to register resources. If it is a global transaction service provider, it will weave in the TccActionInterceptor proxy. These processes are functionalities of the spring module, where the tcc module provides functional classes for use by the spring module.

Four. TCC Resource Manager

TCCResourceManager is responsible for managing the registration, branching, committing, and rolling back of resources under the TCC mode.

  1. During project startup, when the GlobalTransactionScanner in the spring module detects that a bean is a tcc bean, it caches resources locally and registers them with the server:
    @Override
    public void registerResource(Resource resource) {
        TCCResource tccResource = (TCCResource)resource;
        tccResourceCache.put(tccResource.getResourceId(), tccResource);
        super.registerResource(tccResource);
    }

The logic for communicating with the server is encapsulated in the parent class AbstractResourceManager. Here, TCCResource is cached by resourceId. When registering resources in the parent class AbstractResourceManager, resourceGroupId + actionName is used, where actionName is the name specified in the @TwoPhaseBusinessAction annotation and resourceGroupId defaults to DEFAULT.

  1. Transaction branch registration is handled in the rm-datasource package under AbstractResourceManager. During registration, the parameter lockKeys is null, which differs from the transaction branch registration under the AT mode.

  2. Committing or rolling back branches:

    @Override
    public BranchStatus branchCommit(BranchType branchType, String xid, long branchId, String resourceId,
                                     String applicationData) throws TransactionException {
        TCCResource tccResource = (TCCResource)tccResourceCache.get(resourceId);
        if (tccResource == null) {
            throw new ShouldNeverHappenException("TCC resource is not exist, resourceId:" + resourceId);
        }
        Object targetTCCBean = tccResource.getTargetBean();
        Method commitMethod = tccResource.getCommitMethod();
        if (targetTCCBean == null || commitMethod == null) {
            throw new ShouldNeverHappenException("TCC resource is not available, resourceId:" + resourceId);
        }
        try {
            boolean result = false;
            //BusinessActionContext
            BusinessActionContext businessActionContext = getBusinessActionContext(xid, branchId, resourceId,
                applicationData);
            Object ret = commitMethod.invoke(targetTCCBean, businessActionContext);
            if (ret != null) {
                if (ret instanceof TwoPhaseResult) {
                    result = ((TwoPhaseResult)ret).isSuccess();
                } else {
                    result = (boolean)ret;
                }
            }
            return result ? BranchStatus.PhaseTwo_Committed : BranchStatus.PhaseTwo_CommitFailed_Retryable;
        } catch (Throwable t) {
            // the original excerpt referenced an undefined `msg`; define it before use
            String msg = String.format("commit TCC resource error, resourceId: %s, xid: %s.", resourceId, xid);
            LOGGER.error(msg, t);
            throw new FrameworkException(t, msg);
        }
    }

Restore the business context using parameters xid, branchId, resourceId, and applicationData.

Execute the commit method through reflection based on the retrieved context and return the execution result. The rollback method follows a similar approach.

Here, branchCommit() and branchRollback() are provided for AbstractRMHandler, an abstract class for resource processing in the rm module; this handler is a further implementation of the template method defined in the core module. This differs from registerResource(), which actively registers resources during Spring scanning.

Five. Transaction Processing in TCC Mode

The invoke() method of TccActionInterceptor in the spring module is executed when the proxied rpc bean is called. This method first retrieves the global transaction xid passed by the rpc interceptor, and then the transaction process of global transaction participants under TCC mode is still handed over to the ActionInterceptorHandler in the tcc module.

In other words, transaction participants are proxied during project startup. The actual business methods are executed through callbacks in ActionInterceptorHandler.

    public Map<String, Object> proceed(Method method, Object[] arguments, String xid, TwoPhaseBusinessAction businessAction,
                                       Callback<Object> targetCallback) throws Throwable {
        Map<String, Object> ret = new HashMap<String, Object>(4);

        //TCC name
        String actionName = businessAction.name();
        BusinessActionContext actionContext = new BusinessActionContext();
        actionContext.setXid(xid);
        //set action name
        actionContext.setActionName(actionName);

        //Creating Branch Record
        String branchId = doTccActionLogStore(method, arguments, businessAction, actionContext);
        actionContext.setBranchId(branchId);

        //set the parameter whose type is BusinessActionContext
        Class<?>[] types = method.getParameterTypes();
        int argIndex = 0;
        for (Class<?> cls : types) {
            if (cls.getName().equals(BusinessActionContext.class.getName())) {
                arguments[argIndex] = actionContext;
                break;
            }
            argIndex++;
        }
        //the final parameters of the try method
        ret.put(Constants.TCC_METHOD_ARGUMENTS, arguments);
        //the final result
        ret.put(Constants.TCC_METHOD_RESULT, targetCallback.execute());
        return ret;
    }

Here are two important operations:

  1. In the doTccActionLogStore() method, two crucial methods are called:
  • fetchActionRequestContext(method, arguments): This method retrieves the parameters annotated with @BusinessActionContextParameter and inserts them into the BusinessActionContext, along with the transaction-related parameters set in the init method below (a sketch follows after this list).
  • DefaultResourceManager.get().branchRegister(BranchType.TCC, actionName, null, xid, applicationContextStr, null): This method performs the registration of transaction branches for transaction participants under TCC mode.
  1. Callback execution of targetCallback.execute(), which executes the specific business logic of the proxied bean, i.e., the prepare() method.
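To make the first operation concrete, here is a hedged sketch of how parameters annotated with @BusinessActionContextParameter could be pulled into the context (illustrative; the real logic lives in fetchActionRequestContext() and its helpers; `method` and `arguments` are the same values passed into proceed() above):

// requires java.lang.annotation.Annotation, java.util.HashMap, java.util.Map
Map<String, Object> context = new HashMap<>();
Annotation[][] parameterAnnotations = method.getParameterAnnotations();
for (int i = 0; i < parameterAnnotations.length; i++) {
    for (Annotation annotation : parameterAnnotations[i]) {
        if (annotation instanceof BusinessActionContextParameter) {
            BusinessActionContextParameter param = (BusinessActionContextParameter) annotation;
            // store the argument under its declared paramName so the
            // commit/rollback phases can read it back from the context
            context.put(param.paramName(), arguments[i]);
        }
    }
}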

Six. Summary

The tcc module primarily provides the following functionalities:

  1. Defines annotations for two-phase protocols, providing attributes needed for transaction processes under TCC mode.
  2. Provides implementations of ParserRemoting for parsing remoting beans of different RPC frameworks, to be invoked by the spring module.
  3. Provides the TCC ResourceManager for resource registration, transaction branch registration, submission, and rollback under TCC mode.
  4. Provides classes for handling transaction processes under TCC mode, allowing MethodInterceptor proxy classes to delegate the execution of specific mode transaction processes to the tcc module.

Author: Zhao Runze, Series Link.

· One min read

Introduction

Highlights

  • Seata open source project initiator will present "Seata Past, Present and Future" and new features of Seata 1.0.
  • Seata AT, TCC, Saga model explained by Seata core contributors.
  • Practice analyses of Seata from Internet healthcare and DiDi.

If you can't make it

Onsite Benefits

  • Speaker's PPT download
  • Tea breaks, Ali dolls, Tmall Genie and other goodies for you to get!

Agenda


· 6 min read

1. Introduction

The core module defines the types and states of transactions, common behaviors, protocols, and message models for communication between clients and servers, as well as exception handling methods, compilation, compression types, configuration information names, environment context, etc. It also encapsulates RPC based on Netty for use by both clients and servers.

Let's analyze the main functional classes of the core module according to the package order:


codec: Defines a codec factory class, which provides a method to find the corresponding processing class based on the serialization type. It also provides an interface class Codec with two abstract methods:

<T> byte[] encode(T t);
<T> T decode(byte[] bytes);
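To make the Codec contract concrete, here is a hypothetical implementation backed by plain JDK serialization (illustrative only; it is not one of the shipped codecs):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class JdkSerializationCodec implements Codec {

    @Override
    public <T> byte[] encode(T t) {
        try (ByteArrayOutputStream bos = new ByteArrayOutputStream();
             ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(t);
            oos.flush();
            return bos.toByteArray();
        } catch (IOException e) {
            throw new IllegalStateException("encode failed", e);
        }
    }

    @Override
    @SuppressWarnings("unchecked")
    public <T> T decode(byte[] bytes) {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (T) ois.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException("decode failed", e);
        }
    }
}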

In version 1.0, the codec module has three serialization implementations: SEATA, PROTOBUF, and KRYO.

compressor: Similar to classes under the codec package, there are three classes here: a compression type class, a factory class, and an abstract class for compression and decompression operations. In version 1.0, there is only one compression method: Gzip.

constants: Consists of two classes, ClientTableColumnsName and ServerTableColumnsName, representing the models for transaction tables stored on the client and server sides respectively. It also includes classes defining supported database types and prefixes for configuration information attributes.

context: The environment class RootContext holds a ThreadLocalContextCore to store transaction identification information. For example, TX_XID uniquely identifies a transaction, and TX_LOCK indicates the need for global lock control for local transactions on update/delete/insert/selectForUpdate SQL operations.

event: Utilizes the Guava EventBus event bus for registration and notification, implementing the listener pattern. In the server module's metrics package, MetricsManager registers monitoring events for changes in GlobalStatus, which represents several states of transaction processing in the server module. When the server processes transactions, the callback methods registered for monitoring events are invoked, primarily for statistical purposes.

lock: When the server receives a registerBranch message for branch registration, it acquires a lock. In version 1.0, there are two lock implementations: DataBaseLocker and MemoryLocker, representing database locks and in-memory locks respectively. Database locks are acquired based on the rowKey = resourceId + tableName + pk, while memory locks are based directly on the primary key.

model: BranchStatus, GlobalStatus, and BranchType are used to define transaction types and global/branch states. Additionally, TransactionManager and ResourceManager are abstract classes representing resource managers (RMs) and transaction managers (TMs) respectively. Specific implementations of RMs and TMs are not provided here due to variations in transaction types.

protocol: Defines entity classes used for transmission in the RPC module, representing models for requests and responses under different transaction status scenarios.

store: Defines data models for interacting with databases and the SQL statements used for database interactions.

    public void exceptionHandleTemplate(Callback callback, AbstractTransactionRequest request,
                                        AbstractTransactionResponse response) {
        try {
            callback.execute(request, response); // execute the transaction business method
            callback.onSuccess(request, response); // set the response return code
        } catch (TransactionException tex) {
            LOGGER.error("Catch TransactionException while do RPC, request: {}", request, tex);
            callback.onTransactionException(request, response, tex); // set the response return code and message
        } catch (RuntimeException rex) {
            LOGGER.error("Catch RuntimeException while do RPC, request: {}", request, rex);
            callback.onException(request, response, rex); // set the response return code and message
        }
    }

2. Analysis of Exception Handling in the exception Package

This is the UML diagram of AbstractExceptionHandler. Callback and AbstractCallback are internal interfaces and classes of AbstractExceptionHandler. AbstractCallback implements three methods of the Callback interface but leaves the execute() method unimplemented. AbstractExceptionHandler uses AbstractCallback as a parameter for the template method and utilizes its implemented methods. However, the execute() method is left to be implemented by subclasses.


From an external perspective, AbstractExceptionHandler defines a template method with exception handling. The template includes four behaviors, three of which are already implemented, and the behavior execution is delegated to subclasses.

3. Analysis of the rpc Package

When it comes to the encapsulation of RPC by Seata, one need not delve into the details. However, it's worth studying how transaction business is handled.

The client-side RPC class is AbstractRpcRemotingClient:


The important attributes and methods are depicted in the class diagram. The methods for message sending and initialization are not shown in the diagram. Let's analyze the class diagram in detail:

clientBootstrap: This is a wrapper class for the netty startup class Bootstrap. It holds an instance of Bootstrap and customizes the properties as desired.

clientChannelManager: Manages the correspondence between server addresses and channels using a ConcurrentHashMap<serverAddress,channel> container.

clientMessageListener: Handles messages. Depending on the message type, there are three specific processing methods.

public void onMessage(RpcMessage request, String serverAddress, ClientMessageSender sender) {
    Object msg = request.getBody();
    if (LOGGER.isInfoEnabled()) {
        LOGGER.info("onMessage:" + msg);
    }
    if (msg instanceof BranchCommitRequest) {
        handleBranchCommit(request, serverAddress, (BranchCommitRequest)msg, sender);
    } else if (msg instanceof BranchRollbackRequest) {
        handleBranchRollback(request, serverAddress, (BranchRollbackRequest)msg, sender);
    } else if (msg instanceof UndoLogDeleteRequest) {
        handleUndoLogDelete((UndoLogDeleteRequest)msg);
    }
}

4. Analysis of the rpc Package (Continued)

Within the message class, the TransactionMessageHandler is responsible for handling messages of different types. Depending on the transaction type (AT, TCC, SAGA), it eventually invokes the specific handling classes via the exceptionHandleTemplate() described in part 2.

mergeSendExecutorService: This is a thread pool with only one thread, responsible for merging and sending messages from different addresses. In the sendAsyncRequest() method, messages are offered to the thread pool's LinkedBlockingQueue; the thread then polls the queue and processes the messages. A simplified sketch follows.
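A simplified sketch of that merge-and-send pattern (illustrative; the real code batches the messages into one merged message and writes it to the corresponding channel):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MergeSendLoop implements Runnable {

    private final BlockingQueue<Object> basket = new LinkedBlockingQueue<>();

    public void offer(Object message) {
        basket.offer(message); // called from the sending side, e.g. sendAsyncRequest()
    }

    @Override
    public void run() {
        List<Object> batch = new ArrayList<>();
        while (!Thread.currentThread().isInterrupted()) {
            try {
                batch.add(basket.take()); // block until at least one message arrives
                basket.drainTo(batch);    // then grab whatever else is already queued
                // ... merge `batch` into one message and write it to the channel ...
                batch.clear();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}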

channelRead(): Handles server-side HeartbeatMessage.PONG heartbeat messages. Additionally, it processes MergeResultMessage, which are response messages for asynchronous messages. It retrieves the corresponding MessageFuture based on the msgId and sets the result of the asynchronous message.

dispatch(): Invokes the clientMessageListener to handle messages sent by the server. Different types of requests have different handling classes.

In summary, when looking at Netty, one should focus on serialization methods and message handling handler classes. Seata's RPC serialization method is processed by finding the Codec implementation class through the factory class, and the handler is the TransactionMessageHandler mentioned earlier.

5. Conclusion

The core module covers a wide range of functionalities, with most classes serving as abstract classes for other modules. Business models are abstracted out, and specific implementations are distributed across different modules. The code in the core module is of high quality, with many classic design patterns such as the template pattern discussed earlier. It is very practical and well-crafted, deserving careful study.

Series Links

· 4 min read

Dynamically create/close Seata distributed transactions through AOP

This article was written by FUNKYE (Chen Jianbin), principal engineer at an Internet company in Hangzhou.

At the GA conference, senior R&D engineer Chen Pengzhi shared the practice of Seata in DiDi's two-wheeler business, which showed that the demand for dynamic degradation is very real. So this article uses simple Spring Boot AOP to handle the degradation-related processing. Many thanks to Chen Pengzhi for his sharing!

The demo used here: project address.

Let's work through the transformation practice with the following code.

Preparation

  1. Create a TestAspect for testing.
package org.test.config;

import java.lang.reflect.Method;

import org.apache.commons.lang3.StringUtils;
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.AfterReturning;
import org.aspectj.lang.annotation.AfterThrowing;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.reflect.MethodSignature;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

import io.seata.core.context.RootContext;
import io.seata.core.exception.TransactionException;
import io.seata.tm.api.GlobalTransaction;
import io.seata.tm.api.GlobalTransactionContext;

@Aspect
@Component
public class TestAspect {

    private final static Logger logger = LoggerFactory.getLogger(TestAspect.class);

    @Before("execution(* org.test.service.*.*(..))")
    public void before(JoinPoint joinPoint) throws TransactionException {
        MethodSignature signature = (MethodSignature) joinPoint.getSignature();
        Method method = signature.getMethod();
        logger.info("Intercepted method that requires a distributed transaction: {}", method.getName());
        // Here you can use redis or a scheduled task to fetch a key that decides
        // whether the distributed transaction should be closed.
        // Simulate dynamically closing the distributed transaction:
        if ((int) (Math.random() * 100) % 2 == 0) {
            GlobalTransaction tx = GlobalTransactionContext.getCurrentOrCreate();
            tx.begin(300000, "test-client");
        } else {
            logger.info("Closing distributed transaction");
        }
    }

    @AfterThrowing(throwing = "e", pointcut = "execution(* org.test.service.*.*(..))")
    public void doRecoveryActions(Throwable e) throws TransactionException {
        logger.info("Method execution exception: {}", e.getMessage());
        if (!StringUtils.isBlank(RootContext.getXID())) {
            GlobalTransactionContext.reload(RootContext.getXID()).rollback();
        }
    }

    @AfterReturning(value = "execution(* org.test.service.*.*(..))", returning = "result")
    public void afterReturning(JoinPoint point, Object result) throws TransactionException {
        logger.info("End of method execution: {}", result);
        if ((Boolean) result) {
            if (!StringUtils.isBlank(RootContext.getXID())) {
                logger.info("Distributed transaction Id: {}", RootContext.getXID());
                GlobalTransactionContext.reload(RootContext.getXID()).commit();
            }
        }
    }
}

Please note that the package name in the pointcut expressions above should be changed to your own service package name.

  1. Change the service code.
    public Object seataCommit() {
        testService.Commit();
        return true;
    }

Because we intercept both exceptions and return values, your business code can either use try/catch, or simply let the exception propagate so it is intercepted, or judge the return value directly: for example, if code = 200 means success in your business code, commit in that case, and otherwise add a rollback in the return-value advice; one possible return-code variant is sketched below.
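For example, if your services return a result object instead of a boolean, the committing advice could be adapted like this (a sketch assuming a hypothetical ResultVo with a code field):

@AfterReturning(value = "execution(* org.test.service.*.*(..))", returning = "result")
public void afterReturning(JoinPoint point, Object result) throws TransactionException {
    if (StringUtils.isBlank(RootContext.getXID())) {
        return; // no distributed transaction was opened for this call
    }
    if (result instanceof ResultVo && ((ResultVo) result).getCode() == 200) {
        GlobalTransactionContext.reload(RootContext.getXID()).commit();
    } else {
        GlobalTransactionContext.reload(RootContext.getXID()).rollback();
    }
}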

Debugging

  1. Change the code to actively throw exceptions
    public Object seataCommit() {
        try {
            testService.Commit();
            int i = 1 / 0;
            return true;
        } catch (Exception e) {
            // TODO: handle exception
            throw new RuntimeException();
        }
    }

View log:

2019-12-23 11:57:55.386 INFO 23952 --- [.0-28888-exec-7] org.test.controller.TestController : Intercepted method requiring distributed transaction, seataCommit
2019-12-23 11:57:55.489 INFO 23952 --- [.0-28888-exec-7] i.seata.tm.api.DefaultGlobalTransaction : Begin new global transaction [192.168.14.67:8092:2030765910]
2019-12-23 11:57:55.489 INFO 23952 --- [.0-28888-exec-7] org.test.controller.TestController : Creating distributed transaction complete 192.168.14.67:8092:2030765910
2019-12-23 11:57:55.709 INFO 23952 --- [.0-28888-exec-7] org.test.controller.TestController : Method execution exception:null
2019-12-23 11:57:55.885 INFO 23952 --- [.0-28888-exec-7] i.seata.tm.api.DefaultGlobalTransaction : [192.168.14.67:8092:2030765910] rollback status: Rollbacked
2019-12-23 11:57:55.888 ERROR 23952 --- [.0-28888-exec-7] o.a.c.c.C. [. [. [/]. [dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is java.lang.RuntimeException] with root cause

You can see that it has been intercepted and triggered a rollback.

  1. Restore the code to debug the normal situation:
    public Object seataCommit() {
        testService.Commit();
        return true;
    }

View the log:

2019-12-23 12:00:20.876 INFO 23952 --- [.0-28888-exec-2] org.test.controller.TestController : Intercepted method requiring distributed transaction, seataCommit
2019-12-23 12:00:20.919 INFO 23952 --- [.0-28888-exec-2] i.seata.tm.api.DefaultGlobalTransaction : Begin new global transaction [192.168.14.67:8092:2030765926]
2019-12-23 12:00:20.920 INFO 23952 --- [.0-28888-exec-2] org.test.controller.TestController : Creating distributed transaction complete 192.168.14.67:8092:2030765926
2019-12-23 12:00:21.078 INFO 23952 --- [.0-28888-exec-2] org.test.controller.TestController : End of method execution:true
2019-12-23 12:00:21.078 INFO 23952 --- [.0-28888-exec-2] org.test.controller.TestController : Distributed transaction Id:192.168.14.67:8092:2030765926
2019-12-23 12:00:21.213 INFO 23952 --- [.0-28888-exec-2] i.seata.tm.api.DefaultGlobalTransaction : [192.168.14.67:8092:2030765926] commit status: Committed

You can see that the transaction has been committed.

Summary

For more details, we hope you will visit the following address to read the detailed documentation.

nacos website

dubbo website

seata official website

docker official website

· 6 min read

Seata's dynamic degradation needs to be combined with the dynamic configuration subscription feature of the configuration centre. Dynamic configuration subscription means listening and subscribing through the configuration centre and reading updated cached values as needed; third-party configuration centres such as ZK, Apollo, and Nacos come with ready-made listeners that can implement dynamic configuration refresh. Dynamic degradation means that, by dynamically updating the value of a specified configuration parameter, Seata's global transactions can be switched off at runtime (currently only AT mode has this feature).

So how do the multiple configuration centres supported by Seata adapt to different dynamic configuration subscriptions and how do they achieve degradation? Here is a detailed explanation from the source code level.

Dynamic Configuration Subscriptions

The Seata Configuration Centre has a listener baseline interface, which has an abstract method and default method, as follows:

io.seata.config.ConfigurationChangeListener

This listener baseline interface has two main implementation types:

  1. Implementations that register configuration subscription event listeners: used to implement various features based on dynamic configuration subscription. For example, GlobalTransactionalInterceptor implements ConfigurationChangeListener and realizes dynamic degradation based on the dynamic configuration it subscribes to;
  2. Implementations that provide or adapt the configuration centre's dynamic subscription capability: for the file type default configuration centre, which currently has no dynamic subscription capability, the baseline interface can be implemented to add it; for blocking subscriptions that need a separate thread to execute, the baseline interface can be implemented as an adapter, and its thread pool can be reused. There are also asynchronous subscriptions, single-key subscriptions, multi-key subscriptions, and so on; we can implement the baseline interface to adapt each configuration centre.

Nacos Dynamic Subscription Implementation

Nacos has its own internal listener implementation, so the Nacos adapter directly inherits Nacos's internal abstract listener AbstractSharedListener. Its two key members are:

  • dataId: configuration attribute for the subscription;
  • listener: configures the subscription event listener, which is used to use the incoming listener as a wrapper to perform the actual change logic.

It's worth mentioning that nacos doesn't use ConfigurationChangeListener to implement its own listener configuration: on the one hand, Nacos itself already has a listener subscription function, so it doesn't need to; on the other hand, nacos subscriptions are non-blocking, so there is no need to reuse ConfigurationChangeListener's thread pool, i.e., no adaptation is needed.

Add the subscription:

The logic of adding a subscription to a dataId in the Nacos configuration centre is very simple: create a NacosListener from the dataId and a listener, then call the configService#addListener method to register the NacosListener as the listener of that dataId. The dataId is then dynamically subscribed.
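A sketch of that flow against the nacos-client API (the dataId and group values here are placeholders):

public void subscribe() throws NacosException {
    Properties properties = new Properties();
    properties.put("serverAddr", "localhost:8848");
    ConfigService configService = NacosFactory.createConfigService(properties);
    configService.addListener("service.disableGlobalTransaction", "SEATA_GROUP",
        new AbstractSharedListener() {
            @Override
            public void innerReceive(String dataId, String group, String configInfo) {
                // hand the new value to the wrapped Seata ConfigurationChangeListener
            }
        });
}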

file Dynamic subscription implementation

Take its implementation class FileListener as an example. Its key members are:

  • dataId: configuration attribute for the subscription;

  • listener: the configuration subscription event listener, used as a wrapper for the incoming listener to perform the real change logic. It is important to note that **both this listener and FileListener implement the ConfigurationChangeListener interface**; the difference is that FileListener provides the file with dynamic configuration subscription capability, while listener executes the configuration subscription events;

  • executor: a thread pool used for processing configuration change logic, used in the ConfigurationChangeListener#onProcessEvent method.

The implementation of the FileListener#onChangeEvent method gives the file the ability to subscribe to dynamic configurations with the following logic:

It loops indefinitely: it fetches the current value of the subscribed configuration property, fetches the old value from the cache, determines whether there has been a change, and, if so, executes the logic of the externally passed-in listener. A simplified sketch follows.
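In pseudo-Java, the loop behaves roughly like this (a sketch of the behaviour described above, not the actual FileListener code; the ConfigurationChangeEvent construction is assumed):

public final class PollingFileListener {

    private final Configuration fileConfig;              // the file-backed configuration
    private final ConfigurationChangeListener listener;  // the externally passed-in listener
    private final Map<String, String> configCache = new HashMap<>();

    public PollingFileListener(Configuration fileConfig, ConfigurationChangeListener listener) {
        this.fileConfig = fileConfig;
        this.listener = listener;
    }

    public void poll(String dataId) throws InterruptedException {
        while (true) {
            String current = fileConfig.getConfig(dataId); // re-read the current value
            String old = configCache.get(dataId);          // value seen last time
            if (!Objects.equals(current, old)) {
                configCache.put(dataId, current);
                // the event constructor shape is assumed here
                listener.onChangeEvent(new ConfigurationChangeEvent(dataId, current));
            }
            Thread.sleep(1000); // assumed polling interval
        }
    }
}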

ConfigurationChangeEvent is the event class used to carry configuration changes (the changed dataId, the new value, and so on).

How does the getConfig method sense changes to the file configuration? We click into it and find that it ends up with the following logic:

We see that it creates a future, wraps it in a Runnable, puts it into the thread pool for asynchronous execution, and then calls the get method to block until the value is retrieved. Let's move on:

allowDynamicRefresh: the configuration switch for dynamic refresh;

targetFileLastModified: the cached last-modified time of the file.

The above logic:

Get the tempLastModified value of the file's last update, then compare it with targetFileLastModified. If tempLastModified > targetFileLastModified, the configuration has changed in the meantime, so the fileConfig instance is reloaded, replacing the old fileConfig so that subsequent operations obtain the latest configuration values.

The logic for adding a configuration property listener is as follows:

configListenersMap is a configuration listener cache for FileConfiguration with the following data structure:

ConcurrentMap<String/*dataId*/, Set<ConfigurationChangeListener>> configListenersMap

As you can see from the data structure, each configuration property can be associated with multiple event listeners.

Eventually the onProcessEvent method is executed, which is the default method in the listener's base interface, and it calls the onChangeEvent method, which means that it will eventually call the implementation in the FileListener.

Dynamic Degradation

With the dynamic configuration subscription functionality above, we only need to implement the ConfigurationChangeListener listener to build all kinds of features. Currently, dynamic degradation is the only Seata feature built on dynamic configuration subscription.

In the article 「Seata AT mode startup source code analysis」, it was said that in a project where Spring is integrated with Seata, when AT mode starts, methods annotated with GlobalTransactional or GlobalLock are proxied by the woven-in GlobalTransactionalInterceptor. GlobalTransactionalInterceptor implements MethodInterceptor, whose invoke method is eventually executed, so if you want to achieve dynamic degradation, this is the place to do something.

  • Add a member variable to GlobalTransactionalInterceptor:

private volatile boolean disable;

Initialise the assignment in the constructor:

ConfigurationKeys.DISABLE_GLOBAL_TRANSACTION (service.disableGlobalTransaction): this parameter currently has two functions:

  1. to determine whether to enable global transactions at startup;
  2. to decide whether or not to demote a global transaction after it has been enabled.
  • Implement ConfigurationChangeListener:

The logic here is simple: determine whether the listened event belongs to the ConfigurationKeys.DISABLE_GLOBAL_TRANSACTION configuration attribute; if so, update the disable value directly, roughly as sketched below.
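Put together, the listener method amounts to a sketch like this (assuming ConfigurationChangeEvent exposes the changed key and its new value):

@Override
public void onChangeEvent(ConfigurationChangeEvent event) {
    if (ConfigurationKeys.DISABLE_GLOBAL_TRANSACTION.equals(event.getDataId())) {
        disable = Boolean.parseBoolean(event.getNewValue());
    }
}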

  • Next, do something in GlobalTransactionalInterceptor#invoke

As above, when disable = true, neither the global transaction nor the global lock is executed.

  • Configuration Centre Subscription Degradation Listener

io.seata.spring.annotation.GlobalTransactionScanner#wrapIfNecessary

During the wrap logic in Spring AOP, the degradation event listener is subscribed to the current configuration centre.

Author Bio

Zhang Chenghui currently works as a Java engineer in the technology platform department of the Zhongtong Technology information centre, mainly responsible for the development of the Zhongtong messaging platform and the full-link pressure testing project. He loves to share technology, is the author of the WeChat public account "Backend Advancement", blogs at https://objcoding.com/, and is a Seata Contributor, GitHub ID: objcoding.

· 6 min read

Seata can support multiple third-party configuration centres, so how is Seata compatible with so many configuration centres at the same time? Below I will give you a detailed introduction to the principle of Seata Configuration Centre implementation.

Configuration Centre Property Loading

In Seata Configuration Centre, there are two default configuration files:


file.conf is the default configuration properties, and registry.conf mainly stores third-party registry and configuration centre information, and has two main blocks:

registry {
# file, nacos, eureka, redis, zk, consul, etcd3, sofa
# ...
}

config {
# file, nacos , apollo, zk, consul, etcd3
type = "file"
nacos {
serverAddr = "localhost"
namespace = ""
}
file {
name = "file.conf"
}
# ...
}

registry holds the registry's configuration attributes, which are not covered here; config holds the configuration centre's attribute values. The type is file by default, meaning the attributes in the local file.conf are loaded; if the type is any other value, the configuration attribute values are loaded from the corresponding third-party configuration centre.

In the core directory of the config module, there is a configuration factory class ConfigurationFactory, which has the following structure:


You can see that there are some static constants for configuration:

REGISTRY_CONF_PREFIX, REGISTRY_CONF_SUFFIX: the configuration file name and the default configuration file type;

SYSTEM_PROPERTY_SEATA_CONFIG_NAME, ENV_SEATA_CONFIG_NAME, ENV_SYSTEM_KEY, ENV_PROPERTY_KEY: custom filename configuration variables, which also indicates that we can customise the configuration centre's property files.

There is a static code block inside ConfigurationFactory as follows:

io.seata.config.ConfigurationFactory


Based on the custom file-name configuration variables, it determines the name and type of the configuration file; if none are configured, registry.conf is used by default. FileConfiguration is Seata's default configuration implementation class, and with the default values a FileConfiguration default configuration object is generated from the registry.conf configuration file. The SPI mechanism can also be used here to support a third-party extended configuration implementation: implement the ExtConfigurationProvider interface, create a file under META-INF/services/, and fill in the fully qualified name of the implementation class.


Third-party configuration centre implementation class loading

After the static code block logic loads the configuration centre properties, how does Seata select the configuration centre and get the configuration centre property values?

As we just said, FileConfiguration is Seata's default configuration implementation class. It extends AbstractConfiguration, whose base interface is Configuration, which provides methods to get parameter values:

short getShort(String dataId, int defaultValue, long timeoutMills);
int getInt(String dataId, int defaultValue, long timeoutMills);
long getLong(String dataId, long defaultValue, long timeoutMills);
// ...

This means that a third-party configuration centre only needs to implement this interface to be integrated into the Seata configuration centre. I'll use zk as an example below:

First, the third-party configuration centre needs to implement a Provider class:


The provider method, as its name suggests, mainly outputs a specific Configuration implementation class.
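In other words, the provider boils down to something like this sketch (simplified; the real class also carries the @LoadLevel annotation shown later):

public class ZookeeperConfigurationProvider implements ConfigurationProvider {

    @Override
    public Configuration provide() {
        // output the concrete Configuration implementation class for zk
        return new ZookeeperConfiguration();
    }
}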

So how do we get the corresponding third-party Configuration Centre implementation class based on the configuration?

In the Seata project, this is how to get a third-party Configuration Centre implementation:

Configuration CONFIG = ConfigurationFactory.getInstance();

In the getInstance() method the singleton pattern is mainly used to construct the configuration implementation class, which is constructed as follows:

io.seata.config.ConfigurationFactory#buildConfiguration:


First of all, the static code block in ConfigurationFactory gets the configuration centre used in the current environment from the CURRENT_FILE_INSTANCE created from registry.conf; the type is file by default, and other third-party configuration centres can also be configured in registry.conf. Here, the SPI mechanism is again used to load the implementation class of the third-party configuration centre, as follows:


As above, this is the ZookeeperConfigurationProvider configuration implementation output class mentioned just now. Let's take a look at this line of code:

EnhancedServiceLoader.load(ConfigurationProvider.class, Objects.requireNonNull(configType).name()).provide();

The EnhancedServiceLoader is the core class of Seata's SPI implementation. This line of code loads the class names listed in the files under the `META-INF/services/` and `META-INF/seata/` directories. So what happens if more than one Configuration Centre implementation class is loaded?

We notice that the ZookeeperConfigurationProvider class has an annotation above it:

@LoadLevel(name = "ZK", order = 1)

When loading multiple Configuration Centre implementation classes, they are sorted according to order:

io.seata.common.loader.EnhancedServiceLoader#findAllExtensionClass:


io.seata.common.loader.EnhancedServiceLoader#loadFile:


In this way, there is no conflict.

We also find that Seata uses this mechanism for selection: Seata passes a parameter when calling the load method:

Objects.requireNonNull(configType).name()

ConfigType is the configuration centre type, which is an enumerated class:

public enum ConfigType {
    File, ZK, Nacos, Apollo, Consul, Etcd3, SpringCloudConfig, Custom
}

We notice that there is also a name attribute on the LoadLevel annotation; Seata uses it as well when filtering implementation classes:


If the name is equal to LoadLevel's name attribute, then it is the currently configured third-party configuration centre implementation class.

Third-party configuration centre implementation class

ZookeeperConfiguration inherits AbstractConfiguration and has the following constructor:


The constructor creates a zkClient object. What is FILE_CONFIG here?

private static final Configuration FILE_CONFIG = ConfigurationFactory.CURRENT_FILE_INSTANCE;

It turns out to be the registry.conf configuration implementation class created in the static code block, from which you get the properties of the third-party Configuration Centre, construct the third-party Configuration Centre client, and then implement the Configuration interface:


Then you can use the relevant methods of the client to get the corresponding parameter values from the third-party configuration.

Third-party configuration centre configuration synchronization script

I wrote this last weekend and submitted it as a PR; it is still under review and is expected to be available in Seata 1.0, so please look forward to it.

It's located in the script directory of the Seata project:


config.txt holds the locally configured values. After setting up the third-party configuration centre, running the script will sync the configuration in config.txt to the third-party configuration centre.

Author's Bio

Zhang Chenghui currently works as a Java engineer in the technology platform department of the Zhongtong Technology information centre, mainly responsible for the development of the Zhongtong messaging platform and the full-link pressure testing project. He loves to share technology, is the author of the WeChat public account "Backend Advancement", blogs at https://objcoding.com/, and is a Seata Contributor, GitHub ID: objcoding.

· 13 min read

The demo used here: project address.

Author: FUNKYE (Chen Jianbin), principal engineer at an Internet company in Hangzhou.

Preface

Seata configuration for direct connection blog

Seata Integration with Nacos Configuration blog

Following on from the two posts above, let's configure nacos as the configuration centre and the dubbo registry.

Preparation

  1. Install docker
yum -y install docker
  1. Create the nacos and seata databases.
/******************************************/
/* Full database name = nacos */
/* Table name = config_info */
/******************************************/
CREATE TABLE `config_info` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(255) DEFAULT NULL,
  `content` longtext NOT NULL COMMENT 'content',
  `md5` varchar(32) DEFAULT NULL COMMENT 'md5',
  `gmt_create` datetime NOT NULL DEFAULT '2010-05-05 00:00:00' COMMENT 'creation time',
  `gmt_modified` datetime NOT NULL DEFAULT '2010-05-05 00:00:00' COMMENT 'modified time',
  `src_user` text COMMENT 'source user',
  `src_ip` varchar(20) DEFAULT NULL COMMENT 'source ip',
  `app_name` varchar(128) DEFAULT NULL,
  `tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant field',
  `c_desc` varchar(256) DEFAULT NULL,
  `c_use` varchar(64) DEFAULT NULL,
  `effect` varchar(64) DEFAULT NULL,
  `type` varchar(64) DEFAULT NULL,
  `c_schema` text,
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_configinfo_datagrouptenant` (`data_id`,`group_id`,`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info';

/******************************************/
/* Full database name = nacos_config */
/* Table name = config_info_aggr */
/******************************************/
CREATE TABLE `config_info_aggr` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(255) NOT NULL COMMENT 'group_id',
  `datum_id` varchar(255) NOT NULL COMMENT 'datum_id',
  `content` longtext NOT NULL COMMENT 'content',
  `gmt_modified` datetime NOT NULL COMMENT 'modification time',
  `app_name` varchar(128) DEFAULT NULL,
  `tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant field',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_configinfoaggr_datagrouptenantdatum` (`data_id`,`group_id`,`tenant_id`,`datum_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='Add tenant field';


/******************************************/
/* Full database name = nacos_config */
/* Table name = config_info_beta */
/******************************************/
CREATE TABLE `config_info_beta` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(128) NOT NULL COMMENT 'group_id',
  `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name',
  `content` longtext NOT NULL COMMENT 'content',
  `beta_ips` varchar(1024) DEFAULT NULL COMMENT 'betaIps',
  `md5` varchar(32) DEFAULT NULL COMMENT 'md5',
  `gmt_create` datetime NOT NULL DEFAULT '2010-05-05 00:00:00' COMMENT 'creation time',
  `gmt_modified` datetime NOT NULL DEFAULT '2010-05-05 00:00:00' COMMENT 'modified time',
  `src_user` text COMMENT 'source user',
  `src_ip` varchar(20) DEFAULT NULL COMMENT 'source ip',
  `tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant field',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_configinfobeta_datagrouptenant` (`data_id`,`group_id`,`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info_beta';

/******************************************/
/* Full database name = nacos_config */
/* Table name = config_info_tag */
/******************************************/
CREATE TABLE `config_info_tag` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(128) NOT NULL COMMENT 'group_id',
  `tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant_id',
  `tag_id` varchar(128) NOT NULL COMMENT 'tag_id',
  `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name',
  `content` longtext NOT NULL COMMENT 'content',
  `md5` varchar(32) DEFAULT NULL COMMENT 'md5',
  `gmt_create` datetime NOT NULL DEFAULT '2010-05-05 00:00:00' COMMENT 'creation time',
  `gmt_modified` datetime NOT NULL DEFAULT '2010-05-05 00:00:00' COMMENT 'modified time',
  `src_user` text COMMENT 'source user',
  `src_ip` varchar(20) DEFAULT NULL COMMENT 'source ip',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_configinfotag_datagrouptenanttag` (`data_id`,`group_id`,`tenant_id`,`tag_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info_tag';

/******************************************/
/* Full database name = nacos_config */
/* Table name = config_tags_relation */
/******************************************/
CREATE TABLE `config_tags_relation` (
  `id` bigint(20) NOT NULL COMMENT 'id',
  `tag_name` varchar(128) NOT NULL COMMENT 'tag_name',
  `tag_type` varchar(64) DEFAULT NULL COMMENT 'tag_type',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(128) NOT NULL COMMENT 'group_id',
  `tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant_id',
  `nid` bigint(20) NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`nid`),
  UNIQUE KEY `uk_configtagrelation_configidtag` (`id`,`tag_name`,`tag_type`),
  KEY `idx_tenant_id` (`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_tag_relation';

/******************************************/
/* Full database name = nacos_config */
/* Table name = group_capacity */
/******************************************/
CREATE TABLE `group_capacity` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'Primary key ID',
`group_id` varchar(128) NOT NULL DEFAULT '' COMMENT 'Group ID, empty string indicates the entire cluster',
`quota` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'Quota, 0 means use the default value',
`usage` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'Usage',
`max_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'Individual configuration size limit in bytes, 0 means use the default',
`max_aggr_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'Maximum number of aggregated sub-configurations, 0 means use the default',
`max_aggr_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'Maximum sub-configuration size in bytes for a single aggregated data entry, 0 means use the default',
`max_history_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'Maximum number of change history counts',
`gmt_create` datetime NOT NULL DEFAULT '2010-05-05 00:00:00' COMMENT 'Creation Time',
`gmt_modified` datetime NOT NULL DEFAULT '2010-05-05 00:00:00' COMMENT 'Modified time',
PRIMARY KEY (`id`),
UNIQUE KEY `uk_group_id` (`group_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='Cluster and per-Group capacity information table';

/******************************************/
/* Full database name = nacos_config */
/* Table name = his_config_info */
/******************************************/
CREATE TABLE `his_config_info` (
`id` bigint(64) unsigned NOT NULL,
`nid` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`data_id` varchar(255) NOT NULL,
`group_id` varchar(128) NOT NULL,
`app_name` varchar(128) DEFAULT NULL COMMENT 'app_name',
`content` longtext NOT NULL,
`md5` varchar(32) DEFAULT NULL,
`gmt_create` datetime NOT NULL DEFAULT '2010-05-05 00:00:00',
`gmt_modified` datetime NOT NULL DEFAULT '2010-05-05 00:00:00',
`src_user` text,
`src_ip` varchar(20) DEFAULT NULL,
`op_type` char(10) DEFAULT NULL,
`tenant_id` varchar(128) DEFAULT '' COMMENT 'Tenant field',
PRIMARY KEY (`nid`),
KEY `idx_gmt_create` (`gmt_create`),
KEY `idx_gmt_modified` (`gmt_modified`),
KEY `idx_did` (`data_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='Multi-Tenant Transformation';


/******************************************/
/* Full database name = nacos_config */
/* Table name = tenant_capacity */
/******************************************/
CREATE TABLE `tenant_capacity` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT 'Primary key ID',
`tenant_id` varchar(128) NOT NULL DEFAULT '' COMMENT 'Tenant ID',
`quota` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'Quota, 0 means use the default value',
`usage` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'Usage',
`max_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'Individual configuration size limit in bytes, 0 means use the default',
`max_aggr_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'Maximum number of aggregated sub-configurations',
`max_aggr_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'Maximum subconfiguration size in bytes for a single aggregation data, 0 means use default',
`max_history_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT 'Maximum number of change history counts',
`gmt_create` datetime NOT NULL DEFAULT '2010-05-05 00:00:00' COMMENT 'Creation time',
`gmt_modified` datetime NOT NULL DEFAULT '2010-05-05 00:00:00' COMMENT 'Modified time',
PRIMARY KEY (`id`),
UNIQUE KEY `uk_tenant_id` (`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='Tenant capacity information table';


CREATE TABLE `tenant_info` (
`id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
`kp` varchar(128) NOT NULL COMMENT 'kp',
`tenant_id` varchar(128) default '' COMMENT 'tenant_id',
`tenant_name` varchar(128) default '' COMMENT 'tenant_name',
`tenant_desc` varchar(256) DEFAULT NULL COMMENT 'tenant_desc',
`create_source` varchar(32) DEFAULT NULL COMMENT 'create_source',
`gmt_create` bigint(20) NOT NULL COMMENT 'create_time',
`gmt_modified` bigint(20) NOT NULL COMMENT 'modified_time',
PRIMARY KEY (`id`),
UNIQUE KEY `uk_tenant_info_kptenantid` (`kp`,`tenant_id`),
KEY `idx_tenant_id` (`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='tenant_info';

CREATE TABLE users (
username varchar(50) NOT NULL PRIMARY KEY,
password varchar(500) NOT NULL,
enabled boolean NOT NULL
);

CREATE TABLE roles (
username varchar(50) NOT NULL,
role varchar(50) NOT NULL
);

INSERT INTO users (username, password, enabled) VALUES ('nacos', '$2a$10$EuWPZHzz32dJN7jexM34MOeYirDdFAZm2kuWj7VEOJhhZkDrxfvUu', TRUE);

INSERT INTO roles (username, role) VALUES ('nacos', 'ROLE_ADMIN');

-- the table to store GlobalSession data
CREATE TABLE IF NOT EXISTS `global_table`
(
`xid` VARCHAR(128) NOT NULL,
`transaction_id` BIGINT,
`status` TINYINT NOT NULL,
`application_id` VARCHAR(32),
`transaction_service_group` VARCHAR(32),
`transaction_name` VARCHAR(128),
`timeout` INT,
`begin_time` BIGINT,
`application_data` VARCHAR(2000),
`gmt_create` DATETIME,
`gmt_modified` DATETIME,
PRIMARY KEY (`xid`),
KEY `idx_gmt_modified_status` (`gmt_modified`, `status`),
KEY `idx_transaction_id` (`transaction_id`)
) ENGINE = InnoDB
DEFAULT CHARSET = utf8;

-- the table to store BranchSession data
CREATE TABLE IF NOT EXISTS `branch_table`
(
`branch_id` BIGINT NOT NULL,
`xid` VARCHAR(128) NOT NULL,
`transaction_id` BIGINT,
`resource_group_id` VARCHAR(32),
`resource_id` VARCHAR(256),
`branch_type` VARCHAR(8),
`status` TINYINT,
`client_id` VARCHAR(64),
`application_data` VARCHAR(2000),
`gmt_create` DATETIME(6),
`gmt_modified` DATETIME(6),
PRIMARY KEY (`branch_id`),
KEY `idx_xid` (`xid`)
) ENGINE = InnoDB
DEFAULT CHARSET = utf8;

-- the table to store lock data
CREATE TABLE IF NOT EXISTS `lock_table`
(
`row_key` VARCHAR(128) NOT NULL,
`xid` VARCHAR(128),
`transaction_id` BIGINT,
`branch_id` BIGINT NOT NULL,
`resource_id` VARCHAR(256),
`table_name` VARCHAR(32),
`pk` VARCHAR(36),
`gmt_create` DATETIME,
`gmt_modified` DATETIME,
PRIMARY KEY (`row_key`),
KEY `idx_branch_id` (`branch_id`)
) ENGINE = InnoDB
DEFAULT CHARSET = utf8;
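With the DDL above ready, both databases can be initialised from the host before starting the containers. A minimal sketch, assuming the scripts were saved as nacos-schema.sql and seata-schema.sql (hypothetical file names) and that the databases are named nacos and seata:

# hypothetical file names; adjust host, user and database names to your environment
mysql -h your-mysql-ip -uroot -p nacos < nacos-schema.sql
mysql -h your-mysql-ip -uroot -p seata < seata-schema.sql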

  1. Pull the nacos and seata images and run them.
docker run -d --name nacos -p 8848:8848 -e MODE=standalone -e MYSQL_MASTER_SERVICE_HOST=your mysql ip -e MYSQL_MASTER_SERVICE_DB_NAME=nacos -e MYSQL_MASTER_SERVICE_USER=root -e MYSQL_MASTER_SERVICE_PASSWORD=mysql password -e MYSQL_SLAVE_SERVICE_HOST=your mysql ip -e SPRING_DATASOURCE_PLATFORM=mysql -e MYSQL_DATABASE_NUM=1 nacos/nacos-server:latest
docker run -d --name seata -p 8091:8091 -e SEATA_IP=the ip you want to specify -e SEATA_PORT=8091 seataio/seata-server:1.4.2
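Before configuring anything, it's worth confirming that both containers actually came up. A quick sketch, assuming the container names used above:

# list the two containers; both should show an Up status
docker ps --filter name=nacos --filter name=seata
# the nacos console should answer on port 8848 once startup completes
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:8848/nacos/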

Seata Configuration

  1. Since there is no built-in vim in the seata container, we can cp the folder to the host, edit it, and then cp it back.
docker cp containerid:seata-server/resources /the/host/directory/you/want/to/place/the/folder/in
  1. Get the IP addresses of the two containers using the following command:
docker inspect --format='{{.NetworkSettings.IPAddress}}' ID/NAMES
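For example, with the container names used earlier, the two calls would look like this:

docker inspect --format='{{.NetworkSettings.IPAddress}}' nacos
docker inspect --format='{{.NetworkSettings.IPAddress}}' seata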
  1. nacos-config.txt is edited as follows
transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.enableClientBatchSendRequest=false
transport.threadFactory.bossThreadPrefix=NettyBoss
transport.threadFactory.workerThreadPrefix=NettyServerNIOWorker
transport.threadFactory.serverExecutorThreadPrefix=NettyServerBizHandler
transport.threadFactory.shareBossWorker=false
transport.threadFactory.clientSelectorThreadPrefix=NettyClientSelector
transport.threadFactory.clientSelectorThreadSize=1
transport.threadFactory.clientWorkerThreadPrefix=NettyClientWorkerThread
transport.threadFactory.bossThreadSize=1
transport.threadFactory.workerThreadSize=default
transport.shutdown.wait=3
service.vgroupMapping.Your transaction group name=default
service.default.grouplist=127.0.0.1:8091
service.enableDegrade=false
service.disableGlobalTransaction=false
client.rm.asyncCommitBufferLimit=10000
client.rm.lock.retryInterval=10
client.rm.lock.retryTimes=30
client.rm.lock.retryPolicyBranchRollbackOnConflict=true
client.rm.reportRetryCount=5
client.rm.tableMetaCheckEnable=false
client.rm.tableMetaCheckerInterval=60000
client.rm.sqlParserType=druid
client.rm.reportSuccessEnable=false
client.rm.sagaBranchRegisterEnable=false
client.rm.commitRetryCount=5
client.tm.rollbackRetryCount=5
client.tm.defaultGlobalTransactionTimeout=60000
client.tm.degradeCheck=false
client.tm.degradeCheckAllowTimes=10
client.tm.degradeCheckPeriod=2000
store.mode=file
store.publicKey=
store.file.dir=file_store/data
store.file.maxBranchSessionSize=16384
store.file.maxGlobalSessionSize=512
store.file.fileWriteBufferCacheSize=16384
store.file.flushDiskMode=async
store.file.sessionReloadReadSize=100
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.jdbc.Driver
store.db.url=jdbc:mysql://your mysql host ip:3306/seata?useUnicode=true&rewriteBatchedStatements=true
store.db.user=mysql account
store.db.password=mysql password
store.db.minConn=5
store.db.maxConn=30
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.queryLimit=100
store.db.lockTable=lock_table
store.db.maxWait=5000
server.recovery.committingRetryPeriod=1000
server.recovery.asynCommittingRetryPeriod=1000
server.recovery.rollbackingRetryPeriod=1000
server.recovery.timeoutRetryPeriod=1000
server.maxCommitRetryTimeout=-1
server.maxRollbackRetryTimeout=-1
server.rollbackRetryTimeoutUnlockEnable=false
client.undo.dataValidation=true
client.undo.logSerialization=jackson
client.undo.onlyCareUpdateColumns=true
server.undo.logSaveDays=7
server.undo.logDeletePeriod=86400000
client.undo.logTable=undo_log
client.undo.compress.enable=true
client.undo.compress.type=zip
client.undo.compress.threshold=64k
log.exceptionRate=100
transport.serialization=seata
transport.compressor=none
metrics.enabled=false
metrics.registryType=compact
metrics.exporterList=prometheus
metrics.exporterPrometheusPort=9898
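Only a few lines in this file are environment-specific. A small sketch to list the placeholders that still need real values before importing; the grep pattern is just an assumption based on the wording used above:

grep -inE 'your|mysql (account|password)|transaction group' nacos-config.txt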

Click here for detailed parameter configurations.

  1. registry.conf is edited as follows
registry {
# file, nacos, eureka, redis, zk, consul, etcd3, sofa
type = "nacos"

nacos {
serverAddr = "nacos container ip:8848"
namespace = ""
cluster = "default"
}
}

config {
# file, nacos, apollo, zk, consul, etcd3
type = "nacos"

nacos {
serverAddr = "nacos container ip:8848"
namespace = ""
}
}
  1. After the configuration is complete, use the following commands to copy the modified registry.conf into the container, then restart it and watch the logs:
docker cp /home/seata/resources/registry.conf seata:seata-server/resources/
docker restart seata
docker logs -f seata
  1. Run nacos-config.sh to import the Nacos configuration.

eg: sh ${SEATAPATH}/script/config-center/nacos/nacos-config.sh -h localhost -p 8848 -g SEATA_GROUP -t 5a3c7d6c-f497-4d68-a71a-2e5e3340b3ca -u username -w password
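To spot-check that the import landed, one key can be read back through Nacos' open config API. A sketch using a key from the file above and the default group:

curl "http://127.0.0.1:8848/nacos/v1/cs/configs?dataId=service.disableGlobalTransaction&group=SEATA_GROUP"
# should print the configured value, i.e. false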

Refer to Configuration Import Instructions for specific parameter definitions.

  1. Log in to the nacos console to check:

20191202205912

If it looks like the picture, the import was successful.

Debugging

  1. Pull the project shown in the blog post and modify the application.yml and registry.conf of test-service.
registry {
type = "nacos"
nacos {
serverAddr = "host ip:8848"
namespace = ""
cluster = "default"
}
}
config {
type = "nacos"
nacos {
serverAddr = "host ip:8848"
namespace = ""
cluster = "default"
}
}

server:
  port: 38888
spring:
  application:
    name: test-service
  datasource:
    type: com.alibaba.druid.pool.DruidDataSource
    url: jdbc:mysql://mysqlip:3306/test?useUnicode=true&characterEncoding=UTF-8&serverTimezone=UTC
    driver-class-name: com.mysql.cj.jdbc.Driver
    username: root
    password: 123456
dubbo:
  protocol:
    threadpool: cached
  scan:
    base-packages: com.example
  application:
    qos-enable: false
    name: testserver
  registry:
    id: my-registry
    address: nacos://host ip:8848
mybatis-plus:
  mapper-locations: classpath:/mapper/*Mapper.xml
  typeAliasesPackage: org.test.entity
  global-config:
    db-config:
      field-strategy: not-empty
      db-type: mysql
  configuration:
    map-underscore-to-camel-case: true
    cache-enabled: true
    log-impl: org.apache.ibatis.logging.stdout.StdOutImpl
    auto-mapping-unknown-column-behavior: none
  1. Copy the modified registry.conf into test-client's resources, and modify the application.yml:
spring:
  application:
    name: test
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://mysqlIp:3306/test?userSSL=true&useUnicode=true&characterEncoding=UTF8&serverTimezone=Asia/Shanghai
    username: root
    password: 123456
  mvc:
    servlet:
      load-on-startup: 1
  http:
    encoding:
      force: true
      charset: utf-8
      enabled: true
    multipart:
      max-file-size: 10MB
      max-request-size: 10MB
dubbo:
  registry:
    id: my-registry
    address: nacos://host ip:8848
  application:
    name: dubbo-demo-client
    qos-enable: false
server:
  port: 28888
  max-http-header-size: 8192
  address: 0.0.0.0
  tomcat:
    max-http-post-size: 104857600
  1. Execute the undo_log script on each db involved.
CREATE TABLE IF NOT EXISTS `undo_log`
(
`branch_id` BIGINT NOT NULL COMMENT 'branch transaction id',
`xid` VARCHAR(128) NOT NULL COMMENT 'global transaction id',
`context` VARCHAR(128) NOT NULL COMMENT 'undo_log context, such as serialization',
`rollback_info` LONGBLOB NOT NULL COMMENT 'rollback info',
`log_status` INT(11) NOT NULL COMMENT '0:normal status,1:defense status',
`log_created` DATETIME(6) NOT NULL COMMENT 'creation datetime',
`log_modified` DATETIME(6) NOT NULL COMMENT 'modify datetime',
UNIQUE KEY `ux_undo_log` (`xid`, `branch_id`)
) ENGINE = InnoDB
AUTO_INCREMENT = 1
DEFAULT CHARSET = utf8 COMMENT ='AT transaction mode undo table';
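To confirm the table was created on each business database, a quick check from the host; a sketch assuming a local MySQL and a business database named test:

mysql -h 127.0.0.1 -uroot -p -e "SHOW TABLES LIKE 'undo_log';" test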
  1. Run test-service and test-client in that order.

  2. See if the list of services in nacos is as shown below.

20191203132351
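If the console list is hard to read, the same information is available from Nacos' naming API; a sketch with a hypothetical Dubbo service name, replace it with one from your own list:

curl "http://127.0.0.1:8848/nacos/v1/ns/instance/list?serviceName=providers:org.test.service.ITestService"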

Summary

The docker deployment of nacos and seata is now complete. For more details, please visit the following sites and read the detailed documentation:

nacos official website

dubbo official website

seata website

docker official website

· 6 min read

Project address

This article was written by FUNKYE (Chen Jianbin), a principal engineer at an Internet company in Hangzhou.

Preface

The previous post covered the direct-connection configuration of seata; you can see the details in that blog.

We now build on that article to configure nacos as the configuration centre and the dubbo registry.

Preparation

  1. First of all, go to the nacos GitHub and download the latest version.


  1. After downloading, it's very simple: unzip it, go to the bin directory and start it, as shown in the picture:


  1. Once startup finishes, visit: http://127.0.0.1:8848/nacos/#/login


Do you see this interface? Log in with nacos (the account and password are the same), go in and take a look.

At this point you'll find that no services are registered yet.

20191202204147

Don't worry, let's get the seata service connected.

Seata configuration

  1. Go to seata's conf folder. See this file?

That's the one, edit it:

20191202204353

20191202204437

  1. Then remember to save it! Next we open the registry.conf file to edit it:
registry {
# file, nacos, eureka, redis, zk, consul, etcd3, sofa
type = "nacos"

nacos {
serverAddr = "localhost"
namespace = ""
cluster = "default"
}
eureka {
serviceUrl = "http://localhost:8761/eureka"
application = "default"
weight = "1"
}
redis {
serverAddr = "localhost:6379"
db = "0"
}
zk {
cluster = "default"
serverAddr = "127.0.0.1:2181"
session.timeout = 6000
connect.timeout = 2000
}
consul {
cluster = "default"
serverAddr = "127.0.0.1:8500"
}
etcd3 {
cluster = "default"
serverAddr = "http://localhost:2379"
}
sofa {
serverAddr = "127.0.0.1:9603"
application = "default"
region = "DEFAULT_ZONE"
datacenter = "DefaultDataCenter"
cluster = "default"
group = "SEATA_GROUP"
addressWaitTime = "3000"
}
file {
name = "file.conf"
}
}

config {
# file, nacos, apollo, zk, consul, etcd3
type = "nacos"

nacos {
serverAddr = "localhost"
namespace = ""
}
consul {
serverAddr = "127.0.0.1:8500"
}
apollo {
app.id = "seata-server"
apollo.meta = "http://192.168.1.204:8801"
}
zk {
serverAddr = "127.0.0.1:2181"
session.timeout = 6000
connect.timeout = 2000
}
etcd3 {
serverAddr = "http://localhost:2379"
}
file {
name = "file.conf"
}
}

After all the editing, we run nacos-config.sh, and the content of our configured nacos-config.txt is sent to nacos, as shown in the figure:

20191202205743

Output like the above indicates success. Next, log in to the nacos configuration centre and view the configuration list; if the list looks like the figure, the configuration succeeded:

20191202205912

See? Your configuration has all been committed. If running the sh script from the git tool doesn't work, try editing the sh file and changing the loop to the following:

# count failed pushes so the summary below works even when nothing fails
error=0

for line in $(cat nacos-config.txt); do
    key=${line%%=*}
    value=${line#*=}
    echo "\r\n set "${key}" = "${value}
    result=`curl -X POST "http://127.0.0.1:8848/nacos/v1/cs/configs?dataId=$key&group=SEATA_GROUP&content=$value"`
    if [ "$result"x == "true"x ]; then
        echo "\033[42;37m $result \033[0m"
    else
        echo "\033[41;37m $result \033[0m"
        let error++
    fi
done

if [ $error -eq 0 ]; then
    echo "\r\n\033[42;37m init nacos config finished, please start seata-server. \033[0m"
else
    echo "\r\n\033[41;33m init nacos config fail. \033[0m"
fi
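A brief usage note, assuming the edited script sits next to nacos-config.txt:

# run from the directory that contains nacos-config.txt
sh nacos-config.sh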
  1. Now our preparations are all complete. Go to seata-server/bin and run the seata service; success looks like the figure!

20191202210112

Debugging

  1. First, change the pom dependencies of the springboot-dubbo-mybatsiplus-seata project: remove the zk configuration, since we use nacos as the registry.
   <properties>
<webVersion>3.1</webVersion>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
<HikariCP.version>3.2.0</HikariCP.version>
<mybatis-plus-boot-starter.version>3.2.0</mybatis-plus-boot-starter.version>
</properties>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.1.8.RELEASE</version>
</parent>
<dependencies>
<dependency>
<groupId>com.alibaba.nacos</groupId>
<artifactId>nacos-client</artifactId>
<version>1.1.4</version>
</dependency>
<dependency>
<groupId>org.apache.dubbo</groupId>
<artifactId>dubbo-registry-nacos</artifactId>
<version>2.7.4.1</version>
</dependency>
<dependency>
<groupId>org.apache.dubbo</groupId>
<artifactId>dubbo-spring-boot-starter</artifactId>
<version>2.7.4.1</version>
</dependency>
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-lang3</artifactId>
</dependency>
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>fastjson</artifactId>
<version>1.2.60</version>
</dependency>
<!-- <dependency> <groupId>javax</groupId> <artifactId>javaee-api</artifactId>
<version>7.0</version> <scope>provided</scope> </dependency> -->
<dependency>
<groupId>io.springfox</groupId>
<artifactId>springfox-swagger2</artifactId>
<version>2.9.2</version>
</dependency>
<dependency>
<groupId>io.springfox</groupId>
<artifactId>springfox-swagger-ui</artifactId>
<version>2.9.2</version>
</dependency>

<!-- mybatis-plus begin -->
<dependency>
<groupId>com.baomidou</groupId>
<artifactId>mybatis-plus-boot-starter</artifactId>
<version>${mybatis-plus-boot-starter.version}</version>
</dependency>
<!-- mybatis-plus end -->
<!-- https://mvnrepository.com/artifact/org.projectlombok/lombok -->
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>io.seata</groupId>
<artifactId>seata-all</artifactId>
<version>0.9.0.1</version>
</dependency>
<!-- <dependency> <groupId>com.baomidou</groupId> <artifactId>dynamic-datasource-spring-boot-starter</artifactId>
<version>2.5.4</version> </dependency> -->

<!-- <dependency> <groupId>com.baomidou</groupId> <artifactId>mybatis-plus-generator</artifactId>
<version>3.1.0</version> </dependency> -->
<!-- https://mvnrepository.com/artifact/org.freemarker/freemarker -->
<dependency>
<groupId>org.freemarker</groupId>
<artifactId>freemarker</artifactId>
</dependency>
<!-- https://mvnrepository.com/artifact/com.alibaba/druid-spring-boot-starter -->
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>druid-spring-boot-starter</artifactId>
<version>1.1.20</version>
</dependency>
<!-- needed so that the log4j2.yml file is recognised -->
<dependency>
<groupId>com.fasterxml.jackson.dataformat</groupId>
<artifactId>jackson-dataformat-yaml</artifactId>
</dependency>
<dependency> <!-- introduce the log4j2 dependency -->
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-log4j2</artifactId>
</dependency>
<!-- https://mvnrepository.com/artifact/mysql/mysql-connector-java -->
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
<exclusions>
<exclusion>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-logging</artifactId>
</exclusion>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-aop</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<!-- <dependency> <groupId>org.scala-lang</groupId> <artifactId>scala-library</artifactId>
<version>2.11.0</version> </dependency> -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-configuration-processor</artifactId>
<optional>true</optional>
</dependency>
</dependencies>

  1. Then change the directory structure of test-service: delete the zk configuration and change the application.yml file; the directory structure and code follow.
server:
  port: 38888
spring:
  application:
    name: test-service
  datasource:
    type: com.alibaba.druid.pool.DruidDataSource
    url: jdbc:mysql://127.0.0.1:3306/test?useUnicode=true&characterEncoding=UTF-8&serverTimezone=UTC
    driver-class-name: com.mysql.cj.jdbc.Driver
    username: root
    password: 123456
dubbo:
  protocol:
    loadbalance: leastactive
    threadpool: cached
  scan:
    base-packages: org.test.service
  application:
    qos-enable: false
    name: testserver
  registry:
    id: my-registry
    address: nacos://127.0.0.1:8848
mybatis-plus:
  mapper-locations: classpath:/mapper/*Mapper.xml
  typeAliasesPackage: org.test.entity
  global-config:
    db-config:
      field-strategy: not-empty
      id-type: auto
      db-type: mysql
  configuration:
    map-underscore-to-camel-case: true
    cache-enabled: true
    auto-mapping-unknown-column-behavior: none
20191202211833

  1. Then change the registry.conf file; if your nacos is on another server, change it to the corresponding ip and port.

 registry {
type = "nacos"
file {
name = "file.conf"
}
zk {
cluster = "default"
serverAddr = "127.0.0.1:2181"
session.timeout = 6000
connect.timeout = 2000
}
nacos {
serverAddr = "localhost"
namespace = ""
cluster = "default"
}
}
config {
type = "nacos"
file {
name = "file.conf"
}
zk {
serverAddr = "127.0.0.1:2181"
session.timeout = 6000
connect.timeout = 2000
}
nacos {
serverAddr = "localhost"
namespace = ""
cluster = "default"
}
}
  1. Next, we run provideApplication

20191202212000

The startup is successful, and we look at the seata logs:

20191202212028

Success! Now we do the same for test-client: first modify application.yml in the same way, replacing zk with nacos (not repeated in detail here), then copy the registry.conf from test-service into the client project's resources, overwriting the original registry.conf.

Then we can run clientApplication:

20191202212114

  1. Confirm that the service has been published and test that the transaction is running correctly

20191202212203

The service is successfully published and consumed. Now let's go back to swagger and test the rollback to see if everything is OK: visit http://127.0.0.1:28888/swagger-ui.html

20191202212240

Congratulations! If you see this, you've succeeded, just like me.

Summary

The simple setup of nacos with seata is now complete. For more detailed content, please visit the following sites and read the documentation:

nacos official website

dubbo official website

seata official website

· One min read

Event Introduction

Highlight Interpretation

Guest Speakers

  • Ji Min (Qing Ming) "Seata Past, Present, and Future" slides

  • Wu Jiangke "My Open Source Journey with SEATA and SEATA's Application in Internet Healthcare Systems" slides

    1577282651

  • Shen Haiqiang (Xuan Yi) "Essence of Seata AT Mode" slides

    1577282652

  • Zhang Sen "Detailed Explanation of TCC Mode in Distributed Transaction Seata"

    1577282653

  • Chen Long (Yiyuan) "Seata Long Transaction Solution Saga Mode"

    1577282654

  • Chen Pengzhi "Seata Practice in Didi Chuxing's Motorcycle Business" slides

    1577282655

Special Awards