0. Talks & Published Articles
alibaba/innodb-java-reader (https://github.com/alibaba/innodb-java-reader) is a Java implementation for accessing MySQL InnoDB storage engine files directly. As a library or command-line tool, it provides basic read-only features such as examining pages, looking up records by primary key, and generating page heatmaps by LSN or filling rate. The project is useful for prototyping and for learning MySQL internals. It can also serve as a tool to dump table data while offloading that work from the MySQL process under certain conditions.
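The page-examination feature can be pictured with a minimal, self-contained sketch (not innodb-java-reader's actual API) that parses the 38-byte FIL header every 16 KiB InnoDB page begins with — the LSN field read here is what an LSN heatmap would be built from:

```java
import java.nio.ByteBuffer;

/** Minimal sketch: parse a few fields of the 38-byte FIL header that
 *  starts every 16 KiB InnoDB page. Integers are stored big-endian. */
public class FilHeader {
    public final long checksum;   // bytes 0-3:   FIL_PAGE_SPACE_OR_CHKSUM
    public final long pageNumber; // bytes 4-7:   FIL_PAGE_OFFSET
    public final long lsn;        // bytes 16-23: FIL_PAGE_LSN, feeds an LSN heatmap
    public final int pageType;    // bytes 24-25: e.g. 17855 (0x45BF) = INDEX page

    public FilHeader(byte[] page) {
        ByteBuffer buf = ByteBuffer.wrap(page); // ByteBuffer default order is big-endian
        checksum = Integer.toUnsignedLong(buf.getInt(0));
        pageNumber = Integer.toUnsignedLong(buf.getInt(4));
        lsn = buf.getLong(16);
        pageType = Short.toUnsignedInt(buf.getShort(24));
    }

    public static void main(String[] args) {
        // Build a synthetic 16 KiB page and read its header back.
        byte[] page = new byte[16384];
        ByteBuffer buf = ByteBuffer.wrap(page);
        buf.putInt(4, 3);                // page number 3
        buf.putLong(16, 123456789L);     // LSN
        buf.putShort(24, (short) 17855); // FIL_PAGE_INDEX
        FilHeader h = new FilHeader(page);
        System.out.println(h.pageNumber + " " + h.lsn + " " + h.pageType);
    }
}
```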
Navi (https://github.com/neoremind/navi) is a distributed service framework that provides cluster management and high-performance RPC. With Navi, you can build distributed applications with minimal effort, creating a highly scalable architecture that handles remote procedure calls as well as service registration and discovery.
Implemented in Java on the Spring framework, Navi wraps ZooKeeper and uses Protostuff/Protobuf for transport, making it easy to build cluster-aware applications. Navi lets you focus on your application logic; its simple XML or annotation-based configuration keeps the programming experience friendly.
Fountain (https://github.com/neoremind/fountain) is a Java-based toolkit for syncing the MySQL binlog; it provides an easy API to process and publish events.
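The process/publish pattern such a toolkit exposes can be sketched as a small in-memory pipeline (hypothetical names, not Fountain's actual interfaces): parsed row-change events are fanned out to registered consumers.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

/** Hypothetical sketch (not Fountain's actual API): a publisher fans
 *  binlog row-change events out to registered consumers. */
public class BinlogPipeline {
    /** A parsed row-change event: table name, change type, binlog position. */
    public static class RowEvent {
        public final String table;
        public final String type;   // e.g. "INSERT", "UPDATE", "DELETE"
        public final long position; // binlog offset the event was read from

        public RowEvent(String table, String type, long position) {
            this.table = table;
            this.type = type;
            this.position = position;
        }
    }

    private final List<Consumer<RowEvent>> consumers = new ArrayList<>();

    /** Register a downstream consumer, e.g. one that forwards to a message queue. */
    public void register(Consumer<RowEvent> consumer) { consumers.add(consumer); }

    /** Deliver one event to every registered consumer, in registration order. */
    public void publish(RowEvent event) { consumers.forEach(c -> c.accept(event)); }

    public static void main(String[] args) {
        BinlogPipeline pipeline = new BinlogPipeline();
        pipeline.register(e -> System.out.println(e.type + " on " + e.table + " @" + e.position));
        pipeline.publish(new RowEvent("orders", "INSERT", 4L));
    }
}
```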
Dynamic proxy (https://github.com/neoremind/dynamic-proxy) is a useful library for Java developers to generate proxy objects. It leverages a wide range of byte-code generation methods, including:
- ASM
- CGLIB
- Javassist
- JDK Dynamic Proxy
- ByteBuddy
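Of the techniques above, the JDK's built-in mechanism needs no extra dependency. A minimal example of generating a proxy object with it (plain JDK API, independent of the dynamic-proxy library's own wrappers):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class JdkProxyDemo {
    interface Greeter { String greet(String name); }

    /** Wraps any Greeter in a JDK dynamic proxy that decorates the result.
     *  The handler intercepts every interface call on the generated proxy class. */
    static Greeter makeProxy(Greeter target) {
        InvocationHandler handler = (proxy, method, args) ->
                "[proxied] " + method.invoke(target, args);
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[]{Greeter.class},
                handler);
    }

    public static void main(String[] args) {
        Greeter proxy = makeProxy(name -> "hello " + name);
        System.out.println(proxy.greet("world")); // prints "[proxied] hello world"
    }
}
```

CGLIB, Javassist, and ByteBuddy go further than the JDK mechanism: they can subclass concrete classes rather than only implementing interfaces.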
Easy-mapper (https://github.com/neoremind/easy-mapper) is a simple, lightweight, high-performance Java bean mapping framework. By leveraging Javassist, easy-mapper generates mapping byte-code at runtime and loads it into the JVM, so the generated classes can be reused for later mapping invocations.
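Conceptually, the generated code copies same-named, type-compatible fields between beans. The reflection-based sketch below shows that semantics (illustrative only — easy-mapper instead emits equivalent byte-code once via Javassist and reuses the class, avoiding this per-call reflection cost):

```java
import java.lang.reflect.Field;

/** Illustrative field-by-field bean copier. A byte-code-generating mapper
 *  produces equivalent logic once at runtime instead of reflecting per call. */
public class NaiveMapper {
    public static <S, T> T map(S source, T target) throws ReflectiveOperationException {
        for (Field sf : source.getClass().getDeclaredFields()) {
            sf.setAccessible(true);
            try {
                Field tf = target.getClass().getDeclaredField(sf.getName());
                if (tf.getType().isAssignableFrom(sf.getType())) {
                    tf.setAccessible(true);
                    tf.set(target, sf.get(source)); // copy same-named, compatible field
                }
            } catch (NoSuchFieldException ignored) {
                // target has no matching field; skip it
            }
        }
        return target;
    }

    // Hypothetical demo beans, not part of any real API.
    static class Person { String name = "neo"; int age = 30; }
    static class PersonDto { String name; int age; }

    public static void main(String[] args) throws ReflectiveOperationException {
        PersonDto dto = map(new Person(), new PersonDto());
        System.out.println(dto.name + " " + dto.age); // prints "neo 30"
    }
}
```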
Kraps-rpc's server side is built upon netty, which supports asynchronous, non-blocking I/O, while the client side provides a variety of ways to communicate with the server, including short-lived connections, keep-alive TCP connections, high availability, and failover strategies.
This module is mainly for studying how RPC works in Spark. Spark consists of many distributed components, such as the driver, master, executors, and block managers, which communicate with each other through RPC; in the Spark project this functionality is sealed inside the spark-core module. Kraps-rpc separates out the core RPC part, excluding the security and streaming-download features.
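The endpoint model extracted from Spark can be pictured as a registry that dispatches messages to named endpoints. This is a deliberately simplified, in-process sketch (method names are merely reminiscent of Spark's RpcEnv; the real implementation routes messages over netty asynchronously):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

/** Simplified in-process sketch of a Spark-style RPC environment:
 *  endpoints register under a name, callers send a message and get a reply.
 *  The real thing serializes messages and ships them over netty. */
public class RpcEnv {
    private final Map<String, Function<Object, Object>> endpoints = new HashMap<>();

    /** Register an endpoint's receive-and-reply handler under a name. */
    public void setupEndpoint(String name, Function<Object, Object> receiveAndReply) {
        endpoints.put(name, receiveAndReply);
    }

    /** Synchronously ask a named endpoint and return its reply. */
    public Object askSync(String name, Object message) {
        Function<Object, Object> ep = endpoints.get(name);
        if (ep == null) throw new IllegalArgumentException("no endpoint: " + name);
        return ep.apply(message);
    }

    public static void main(String[] args) {
        RpcEnv env = new RpcEnv();
        env.setupEndpoint("echo", msg -> "echo: " + msg);
        System.out.println(env.askSync("echo", "hi")); // prints "echo: hi"
    }
}
```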
Apache Hadoop YARN is a general-purpose resource management and scheduling platform in the big data field. Many computing frameworks can run on YARN, such as MapReduce, Spark, Flink, and Storm; these frameworks can focus on computation itself and integrate through the highly abstracted interfaces YARN provides. Beyond big data workloads, long-running services can also run on YARN. This project is an exploration of that idea: a demo built on the low-level YARN APIs.
SSHXCUTE is a framework designed to let engineers execute commands and scripts on remote Linux/UNIX systems from Java over an SSH connection, making it easier to automate software testing and system environment deployment.
11. Articles Published on IBM DeveloperWorks