In the era of big data, abundant data are available across many different data sources in various formats. "Broad Learning" is a new type of learning task that focuses on fusing multiple large-scale information sources of diverse varieties and carrying out synergistic data mining tasks across these fused sources in one unified analytic framework. Great challenges exist in "Broad Learning" for the effective fusion of relevant knowledge across different data sources, which depends not only on the relatedness of these data sources but also on the target application problem. In this talk we examine how to fuse heterogeneous information to improve mining effectiveness over mobile applications.
With the rapid development of multi-platform, multi-sensor technologies, collaborative information acquisition architectures have been developed and applied to spatial information acquisition, and the demand for big data processing is also increasing. How can these data be associated and fused? How can more valuable information be extracted from large-scale collaborative systems? How can big data resources be managed more effectively? We face many such scientific questions. This talk presents a preliminary analysis of the current scientific issues in the acquisition and processing of big data, in the hope that these issues will be supported by the big data industry and explored further.
Prof. Jing Chen is an expert in communications and information systems and a doctoral supervisor at his institute. He has led a number of large-scale systems engineering projects, and the related technologies have been widely used in satellite ground applications such as resources, oceans, meteorology, and earth observation. He has won the First Award of the National Prize for Progress in Science and Technology twice and the Second Award once. In 2005, he was elected a member of the Chinese Academy of Engineering.
In computational science, computation theory mainly comprises computability, computational complexity, and algorithm design and analysis. This report discusses only the former two issues and focuses on computational complexity theory for big data. It mainly covers computation models and computation theories; the computation of P problems and the parallel class NC; and the computation of NP problems and the interactive proof class IP. Finally, in the conclusion, we present the inclusion relations among the various complexity classes and research strategies for the P and NP problems in the case of big data.
Guoliang Chen is an Academician of the Chinese Academy of Sciences and a Professor at Nanjing University of Posts and Telecommunications, where he is a PhD supervisor and Honorary Dean of the School of Computer Science and Technology. Professor Chen is also the Director of the Institute of High Performance Computing and Big Data Processing and the Director of the Academic Committee of Nanjing University of Posts and Telecommunications, as well as the Deputy Director of the Academic Committee of the Jiangsu Provincial High-tech Key Laboratory for Wireless Sensor Networks. He was among the first recipients of the National Distinguished Teacher Award in higher education and enjoys a special government allowance. He graduated from Xi'an Jiaotong University in 1961. Professor Chen also holds part-time positions as Dean of the School of Software Science and Technology, University of Science and Technology of China; Dean of the School of Computer Science, Shenzhen University; Director of the National High-Performance Computing Center; Director of the Instructional Committee for Basic Computer Courses of the Ministry of Education; Director of International High-Performance Computing (Asia); and director of the China Computer Federation and of its High Performance Computing Professional Committee. Professor Chen also serves as Director of the Academic Committee of the State Key Laboratory of Computer Science.
His research interests mainly include parallel algorithms and high-performance computing and their applications. Professor Chen has undertaken more than 20 scientific research projects, including projects under the National 863 Plan, the National "Climbing" Plan, the National 973 Plan, and the National Natural Science Foundation of China. A number of his research achievements have been widely cited at home and abroad and have reached an internationally advanced level. He has published more than 200 papers and more than 10 academic works and textbooks. He has won the Second Prize of the National Science and Technology Progress Award; the First and Second Prizes of the Science and Technology Progress Award of the Ministry of Education; the First Prize of the Science and Technology Progress Award of the Chinese Academy of Sciences; the Second Prize of the National Teaching Achievement Award; the First Prize of the Ministry of Water Resources; the Second Prize of the Anhui Province Science and Technology Progress Award; and the 2009 Anhui Provincial Major Science and Technology Achievement Award. Professor Chen also won the Important Contribution Award for Advanced Individuals on the 15th anniversary of the National 863 Plan, the Baosteel Education Fund Outstanding Teacher Special Award, and the honorary title of Model Worker of Anhui Province.
For years, Professor Chen has developed a complete parallel-algorithm discipline of "algorithmic theory, algorithm design, algorithm implementation, algorithm application" around the teaching and research of parallel algorithms. He proposed the parallel computing research method of "parallel machine architecture, parallel algorithm, parallel programming", established China's first national high-performance computing center, built a research and teaching base for parallel algorithms in China, and has trained more than 200 postdoctoral researchers, doctoral students, and postgraduates. Professor Chen is the academic leader of non-numerical parallel algorithm research in China and is influential in academic and educational circles at home and abroad. He established China's first national high-performance computing center in 1995, and in 2007, 2009, 2012, and 2014 successively developed the KD-50, KD-60, and KD-90 high-performance computers based on China's first domestic high-performance general-purpose processor chips (single-core, four-core, and eight-core Godson), which provide infrastructure for cloud computing, big data processing, and general-purpose high-performance computing in China.
Ever since humanity gained consciousness, science, industry, and business have played inalienable roles in the development of its civilizations, and they have always developed in ways that reinforce each other. We believe that the same will be replayed for big data. In this talk, we will begin by reviewing how major scientific discoveries have, in one form or another, involved the discovery and accumulation of data, and how it is typically a profound understanding of the data that leads to discoveries. With this as preamble, we argue that because data are de facto representations of natural phenomena, they should naturally have their own inherent rules. Thus, in today's age of enlightenment, there is a call for the creation and development of "data science," the objective of which is to discover such rules. We then address several critical issues related to the development of new data industry and business. We argue that new data industry and business will not be possible unless the fair sharing of large amounts of data is properly realized. The legal, economic, and technical aspects of the fair sharing of big data must be properly resolved; only then can we effectively create new data industry and business that benefits every human being on earth.
An internationally renowned scholar, Professor Wei Zhao is currently serving the American University of Sharjah as its Chief Research Officer. From 2008 to 2018, he served as the eighth Rector (i.e., President) of the University of Macau. Before joining the University of Macau, Professor Zhao served as the Dean of the School of Science at Rensselaer Polytechnic Institute in the U.S., Director for the Division of Computer and Network Systems in the U.S. National Science Foundation, and Senior Associate Vice President for Research at Texas A&M University. Professor Zhao completed his undergraduate studies in physics at Shaanxi Normal University, Xi'an, China, in 1977, and received his MSc and PhD degrees in Computer and Information Sciences at the University of Massachusetts at Amherst in 1983 and 1986, respectively.
An IEEE Fellow, Professor Zhao has made significant contributions in distributed computing, real-time systems, computer networks, and cyberspace security. He led the effort to define the research agenda of Cyber-Physical Systems and to create the very first funding program for CPS R&D when he served as the NSF CNS Division Director in 2006. His research group has received numerous awards, including the outstanding paper award from the IEEE International Conference on Distributed Computing Systems, the best paper award from the IEEE National Aerospace and Electronics Conference, an award on technology transfer from the Defense Advanced Research Projects Agency, and the best paper award from the IEEE International Communication Conference. In 2011, he was named by the Chinese Ministry of Science and Technology as the Chief Scientist of the national 973 Internet of Things Project.
In recognition of his outstanding achievements in science and higher education, Professor Zhao was awarded the Lifelong Achievement Award by the Chinese Association of Science and Technology in 2005. In 2007, he was honored with the Overseas Achievement Award by the Chinese Computer Federation. Professor Zhao has been conferred honorary doctorates by twelve universities around the world and has been elected an academician of the International Eurasian Academy of Sciences.
It is already true that Big Data has drawn huge attention from researchers in the information sciences and from policy and decision makers in governments and enterprises. A large number of fields and sectors, ranging from economic and business activities to public administration, and from national security to scientific research in many areas, involve Big Data problems. This talk aims to present a close-up view of computational intelligence techniques for Big Data analysis.
This talk starts with the context in which management meets big data, where decision making is becoming more and more data-centric and analytics-based. Next, the challenges facing research and applications in managerial decision making are discussed in terms of a paradigm shift characterized by external embedding, technological enhancement, and enabled innovation. Then an overall framework, namely the panoramic PAGE framework, is presented, highlighting the major themes of the NSFC grand research plan.
Guoqing Chen is currently EMC Chair Professor of Information Systems at the Tsinghua University School of Economics and Management (Tsinghua SEM). He was appointed China's National Chang-Jiang Scholars Professor by the Ministry of Education of China (MOEC) in 2005, and is a recipient of the Award for Outstanding Achievements in Management from the Fudan Foundation. At present, Professor Chen is chair of the Steering Committee for the National Grand Research Plan on Big Data Driven Management and Decision Sciences of the National Natural Science Foundation of China (NSFC).
Professor Chen serves in many important committees/societies such as member of China’s National Advisory Committee for State Informatization, chairman of the MOEC Educational Steering Committee for management science and engineering disciplines, vice-president of International Fuzzy Systems Association (IFSA), vice-chairman of China’s Information Economy Society, vice-chairman of China’s Systems Engineering Society, etc. He was founding president of China Association for Information Systems (CNAIS, 2005-2013).
Professor Chen has numerous publications worldwide and has been principal investigator for a number of important national research initiatives, including Major Projects of the National Natural Science Foundation of China (NSFC) on e-business and big data analytics. He has led various international collaboration projects with Belgium, the Czech Republic, Germany, the UK, the USA, etc. His research and teaching interests include big data and business analytics, e-business and IT/IS management, fuzzy logic, and data modeling.
Information gain is a basic concept in information theory that has been successfully used to measure the degree of influence of attributes in the construction of decision trees. Similar to decision-tree algorithms, the causality analysis algorithm has been applied in factor space for factor attribution. The main concept in the new algorithm is pseudo-information gain, which is order-preserving but much simpler to calculate than information gain. This paper introduces some results around pseudo-information gain. The most interesting fact is that information gain is not a perfect tool for factor attribution; an improved measurement, named the cross deterministic degree, is suggested in the paper.
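Information gain itself is simple to compute; as a point of reference for the quantities being compared, a minimal sketch (with hypothetical attribute data, assuming base-2 entropy) is:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, attribute_values):
    """Information gain of splitting `labels` by `attribute_values`."""
    n = len(labels)
    groups = {}
    for v, y in zip(attribute_values, labels):
        groups.setdefault(v, []).append(y)
    # expected entropy remaining after the split, weighted by group size
    remainder = sum((len(g) / n) * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

labels = ['yes', 'yes', 'no', 'no']
print(information_gain(labels, ['a', 'a', 'b', 'b']))  # 1.0 (perfectly predictive)
print(information_gain(labels, ['a', 'b', 'a', 'b']))  # 0.0 (uninformative)
```

A perfectly predictive attribute attains the full label entropy as its gain, while an uninformative one scores zero; this is the ordering that a cheaper order-preserving surrogate such as pseudo-information gain must reproduce.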
Pei-Zhuang Wang received the BS degree in Mathematics from Beijing Normal University, China, in 1957. He was a close friend of L. A. Zadeh, the father of fuzzy set theory, and a main academic leader of fuzzy mathematics and its applications in China, putting forward the Falling Shadow Measure, Truth Value Flow Reasoning, and Factor Space theories. He is a former vice-chairman of the International Fuzzy Systems Association. Currently, he is the head of the College of Intelligence Engineering and Mathematics, Liaoning Technical University. His research interests mainly include cognitive mathematics applied in data science and artificial intelligence.
This talk will introduce the history of, and the connection between, Big Data and Data Science (DS) in China. Big Data has attracted a lot of attention from scholars in information and computer science, and especially from the press. But over time we may find that, as a theoretical discipline, big data has given way to data science, since data science covers a broader scope of research. In terms of its main research directions, DS now covers data mining, machine learning, and big data. In terms of the data itself, it should cover data, big data, small data, experimental data, experiential data, artificial data, etc. Data science is an interdisciplinary science: it relates not only to computer science and statistics, but also to information science, knowledge science, and systems science.
Ms. Zhao Yue is the executive director and general manager of Unicom Big Data Co., Ltd., and the chairman of Smart Steps Digital Technology Co., Ltd. She graduated in computer networks from Northeastern University, acquired an MBA from Fudan University and the Norwegian School of Management, and trained in the Harvard Business School PMD management program. She has long worked in telecom and Internet operation innovation, focusing in recent years on big data innovation applications.
Hugo Terashima-Marín received a BSc in Computational Systems from Tecnológico de Monterrey, Campus Monterrey, in 1982; an MSc in Computer Science from the University of Oklahoma in 1987; an MSc in Information Technology and Knowledge-based Systems from the University of Edinburgh in 1994; and a PhD in Informatics from Tecnológico de Monterrey, Campus Monterrey, in 1998.
Dr. Terashima-Marín is a Full Professor at the School of Engineering and Sciences, the Leader of the Research Group with Strategic Focus in Intelligent Systems, and Director of the Graduate Program in Computer Science. He is a member of the National System of Researchers, the Mexican Academy of Sciences, and the Mexican Academy of Computing. He serves on the Technical and Academic Council of the Thematic Network of Applied Computational Intelligence supported by CONACyT. His research areas are computational intelligence; heuristics, metaheuristics, and hyper-heuristics for combinatorial optimization; characterization of problems and algorithms; constraint handling; and applications of artificial intelligence. He has been principal investigator of projects for industry and CONACyT, and has ongoing collaborations with research groups at the University of Nottingham, the University of Stirling, Edinburgh Napier University, the University of Texas at San Antonio, and Universidad Andrés Bello in Santiago de Chile. He has published more than 70 research articles in international journals and conferences, and has supervised 5 PhD dissertations and 28 Master's theses.
In the past, he has been Director of the MSc in Intelligent Systems, the PhD in Artificial Intelligence, the PhD in Information Technology and Communications, and the PhD and Graduate Programs at Tecnológico de Monterrey, Campus Monterrey.
Data scientists are challenged with more and more data analysis issues due to the exponential growth of the data to be analysed. Throughout, the aim is to apply methods and algorithms that discover global properties of the data, while also dealing with various types of uncertainty. It is well known that different unsupervised machine learning clustering methods applied to the same input data produce different results. This talk therefore presents high-level visual tools that provide an opportunity to analyze the processed data, the methods, and the obtained results. We focus in particular on unsupervised machine learning methods for text clustering and present a system that allows tuning and improving certain clustering methods, supports better interpretation of the clustered results, and thus makes it possible to compare the results of different clustering methods more efficiently.
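One simple way to quantify how strongly two clustering results disagree, before any visual inspection, is a pair-counting agreement score such as the Rand index; a minimal sketch over hypothetical label vectors:

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Fraction of item pairs on which two clusterings agree:
    both place the pair in the same cluster, or both place it apart."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum(
        (labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
        for i, j in pairs
    )
    return agree / len(pairs)

# Two clusterings of six items that differ on one item's assignment:
a = [0, 0, 0, 1, 1, 1]
b = [0, 0, 1, 1, 1, 1]
print(rand_index(a, a))  # 1.0 (identical clusterings)
print(rand_index(a, b))  # 2/3 of pairs treated the same way
```

Scores like this give a single number per pair of methods; the visual tools described in the talk go further by showing where the disagreements occur.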
In this talk, I demonstrate a Brain Informatics based systematic approach to an integrated understanding of the macroscopic- and microscopic-level working principles of the brain by means of experimental, computational, and cognitive neuroscience studies, as well as by utilizing advanced Web Intelligence (AI in the connected world) centric information technologies. I discuss research issues and challenges with respect to brain computing from three aspects of Brain Informatics studies that deserve closer attention: systematic investigations of complex brain science problems, new information technologies for supporting systematic brain science studies, and Brain Informatics studies driven by Web Intelligence research needs. These three aspects offer different ways to study traditional cognitive science, neuroscience, brain and mental health, and artificial intelligence.
Ning Zhong (http://maebashi-it.org/~zhong) received the Ph.D. degree from the University of Tokyo. He is currently head of the Knowledge Information Systems Laboratory and a professor in the Department of Life Science and Informatics at Maebashi Institute of Technology, Japan. He is also director and an adjunct professor of the International WIC Institute (WICI), and a principal investigator of the Brain Informatics Based Wisdom Service group at the Beijing Advanced Innovation Center for Future Internet Technology, Beijing University of Technology.
Dr. Zhong is the founding editor-in-chief of Web Intelligence journal (IOS Press), the editor-in-chief of Brain Informatics journal (Springer Nature), the editor-in-chief of Brain Informatics & Health (BIH) book series (Springer Nature), and serves as associate editor/editorial board for several international journals and book series. Dr. Zhong is the co-founder and co-chair of Web Intelligence Consortium (WIC), chair of the IEEE Computational Intelligence Society Task Force on Brain Informatics, co-founder and steering committee co-chair of IEEE/WIC/ACM international conference on Web Intelligence (WI), and co-founder and steering committee co-chair of international conference on Brain Informatics (BI).
Many combinatorial optimization problems are NP-hard and require significant specialized knowledge and trial-and-error to design good heuristics or approximation algorithms. The game of Go can also be viewed as a combinatorial optimization problem, so AlphaGo can be considered a fast approximate solution algorithm for that optimization problem. Inspired by AlphaGo, data-driven approximate methods for combinatorial optimization problems can be designed. In this talk, we first briefly introduce the basic principles of AlphaGo, and then give an approximate method for the travelling salesman problem (TSP) based on the data-driven design principle of AlphaGo. Point matching is an important issue in many visual applications, such as pose estimation and target recognition. In this talk, we also propose a data-driven method for this problem: a novel end-to-end model (Multi-Pointer Network) based on Recurrent Neural Networks (RNNs).
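For context, classical hand-crafted heuristics are the baseline that such data-driven methods aim to match or beat. A minimal hypothetical example (not the talk's method) is the greedy nearest-neighbor construction for TSP:

```python
import math

def tour_length(points, tour):
    """Total length of a closed tour over 2-D points."""
    return sum(
        math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

def nearest_neighbor_tour(points):
    """Hand-designed heuristic: always visit the closest unvisited city."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: math.dist(points[last], points[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Four corners of a unit square; the optimal closed tour has length 4.
points = [(0, 0), (0, 1), (1, 1), (1, 0)]
tour = nearest_neighbor_tour(points)
print(tour, tour_length(points, tour))  # e.g. [0, 1, 2, 3] 4.0
```

A learned model in the AlphaGo style would replace the fixed `min`-by-distance rule with a policy trained on many solved instances, choosing the next city from data rather than from a hand-coded criterion.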
In the era of big data, stock markets are closely connected with Internet big data from diverse sources. This paper makes a first attempt to compare the linkage between stock markets and various kinds of Internet big data collected from search engines, public media, and social media, respectively. To this end, a big-data-based causality testing framework is proposed, comprising three steps: data crawling, data mining, and causality testing. Taking the Shanghai Stock Exchange and the Shenzhen Stock Exchange as target stock markets, and web search data, news, and microblogs as samples of Internet big data, some interesting findings are obtained. (1) There is a strong bi-directional linear and nonlinear causality between stock markets and investors' web search behavior, due to some similar trends and uncertain factors. (2) News sentiment from public media has bi-directional linear Granger causality with stock markets, while microblog sentiment from social media has unidirectional linear Granger causality with stock markets, running from stock markets to microblog sentiment. (3) News sentiment explains the changes of stock markets better than microblog sentiment, owing to its authority. These results might provide valuable information for both stock market investors and modelers.
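The causality-testing step can be illustrated with a minimal linear Granger test; the sketch below uses synthetic series in place of the crawled market and sentiment data, with a hypothetical `granger_f_stat` helper comparing restricted and unrestricted autoregressions:

```python
import numpy as np

def granger_f_stat(x, y, lag=1):
    """F-statistic for H0: past values of x do not help predict y,
    comparing an AR(lag) model of y against one augmented with lags of x."""
    n = len(y)
    Y = y[lag:]
    ylags = np.column_stack([y[lag - k - 1: n - k - 1] for k in range(lag)])
    xlags = np.column_stack([x[lag - k - 1: n - k - 1] for k in range(lag)])
    ones = np.ones((n - lag, 1))
    Xr = np.hstack([ones, ylags])           # restricted: lags of y only
    Xu = np.hstack([ones, ylags, xlags])    # unrestricted: plus lags of x
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    df_num, df_den = lag, n - lag - Xu.shape[1]
    return (rss_r - rss_u) / df_num / (rss_u / df_den)

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = np.roll(x, 1) + 0.1 * rng.normal(size=200)  # y lags x by one step
y[0] = 0.0
print(granger_f_stat(x, y))  # large F: x Granger-causes y
print(granger_f_stat(y, x))  # small F: y does not Granger-cause x
```

In the paper's framework, `x` and `y` would be, for example, a daily sentiment index and a stock index return series, and the test would be run in both directions to detect uni- versus bi-directional causality.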
Keywords: stock markets; Internet big data; Granger causality test; web search behavior; investors' sentiment.
Promoting the openness and sharing of data is a key part of the national big data strategy. Existing data management techniques focus on self-governance by internal owners, which is unsuitable for the openness of data. In this talk, I will introduce a novel model for data openness, called Self-Governing Openness of Data: data owners manage their own data, determine the right to use the data, make the regulations, and then open the data to users (i.e., permit users to download data or upload application programs that use the data). Such a model is important and necessary for realizing the openness of data. It would promote the openness and sharing of government data, the transaction of enterprise and personal data, and the realization of national data sovereignty.
Yangyong Zhu is a Professor of Computer Science at Fudan University, Shanghai, China. He received a Ph.D. degree in Computer and Software Theory from Fudan University in 1994. His research interests include data science and big data, and he has published more than 100 papers. His research has been supported in part by the National High-Tech Research and Development Plan (863) of China, the National Natural Science Foundation of China (NSFC), and the Development Fund of the Shanghai Science and Technology Commission. He is a Doctoral Supervisor in the School of Computer Science, Fudan University, and the Director of the Shanghai Key Laboratory of Data Science, Fudan University.
This talk will introduce the study of a new and innovative public governance method that uses big data to evaluate city governments' annual work performance from the perspective of the social credit system.
Huang Wei (Wayne) is a Distinguished National "Qian-Ren" Professor and "Changjiang Scholar", and Director of the Collaborative Research Center on China's Economic Development Reform & Evaluation, supported by the National Development and Reform Commission of China (NDRC) and Xi'an Jiaotong University (XJTU). He has also been invited as a Fellow of Harvard University and is a tenured Full Professor at Ohio University, USA. He has more than 30 years of teaching and research experience at research universities in America, Australia, Hong Kong, Singapore, and China.
Wayne's research areas include computer-mediated managerial communication (CMC), business analytics, group support systems (GSS), big data management and data quality, e-government/e-commerce, IT and service outsourcing, and IT/IS management. His publications include more than 10 books published in America and Germany (including book chapters) and more than 160 peer-reviewed journal papers published in the world's top-tier academic journals, such as MIS Quarterly (MISQ), Journal of MIS (JMIS), IEEE Transactions, European Journal of Operational Research (EJOR), Communications of the ACM (CACM), ACM Transactions, and European Journal of Information Systems (EJIS), as well as in other international journals and conference proceedings.
His publications have been cited more than 3,300 times, with more than 100 citations in top international journals such as Management Science (MS), Operations Research (OR), MIS Quarterly (MISQ), Information Systems Research (ISR), and Journal of MIS (JMIS). His h-index is 27.
Large samples are now routinely generated from various sources. Classic statistical and analytical methods are not well equipped to analyze such large samples because of their expensive computational cost.
In this talk, I will present an asympirical (asymptotic + empirical) analysis of large samples. The proposed method can significantly reduce computational costs for high-dimensional and large-scale data. We show that the estimator based on the proposed method achieves the optimal convergence rate. Extensive simulation studies will be presented to demonstrate the numerical advantages of our method over competing methods, and I will further illustrate its empirical performance using two real-data examples.
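The asympirical method itself is not spelled out in the abstract; as a hedged illustration of the general idea of reducing computational cost by working with a subsample, consider fitting least squares on a uniform subsample of synthetic data and comparing it with the full-sample fit:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, m = 100_000, 5, 2_000          # full sample size, dimension, subsample size
X = rng.normal(size=(n, p))
beta = np.arange(1.0, p + 1.0)       # true coefficients 1..5
y = X @ beta + rng.normal(size=n)

# full-sample OLS (expensive at scale) vs. OLS on a uniform random subsample
beta_full = np.linalg.lstsq(X, y, rcond=None)[0]
idx = rng.choice(n, size=m, replace=False)
beta_sub = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]

print(np.max(np.abs(beta_sub - beta_full)))  # small: subsample tracks the full fit
```

Uniform subsampling is only the simplest scheme; more refined approaches weight rows by influence (e.g., leverage scores) to retain accuracy with far fewer observations, which is the kind of computational saving the talk targets.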
Ping Ma is a Professor of Statistics and co-directs the big data analytics lab at the University of Georgia, USA. He was Beckman Fellow at the Center for Advanced Study at the University of Illinois at Urbana-Champaign, Faculty Fellow at the US National Center for Supercomputing Applications, and a recipient of the US National Science Foundation CAREER Award. His paper won the best paper award of the Canadian Journal of Statistics in 2011. He serves on multiple editorial boards including the Journal of the American Statistical Association and Statistical Applications in Genetics and Molecular Biology. He is a fellow of the American Statistical Association.
- Financial data mining algorithms
- The establishment of a financial management decision data mining service platform
- An intelligent risk early-warning system based on public opinion big data
Qing He is a professor and PhD supervisor at the Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences. At the end of 2008, commissioned by the Research Institution of China Mobile, Professor He led his team, the Data Mining and Machine Learning Research Group, to complete a parallel data mining platform based on cloud computing. The system is used for mining TB-scale real-world data. This innovation achieves high-efficiency, low-cost data mining and gives China independent intellectual property rights in cloud-based data mining technology; the work also pioneered data mining based on cloud computing in China. As project director, Professor He has completed several programs supported by the National Natural Science Foundation of China and the National 863 Programme, and he and his team have proposed a series of effective data mining algorithms. The data mining software developed under his leadership has acquired software copyrights and is widely used by dozens of companies across various industries, bringing considerable economic and social benefits. He won the Wu Wenjun Artificial Intelligence Science and Technology Innovation Award in 2015.
Due to data transmission costs and data privacy, traditional statistical tools cannot be directly applied to scattered datasets. Decentralized algorithms tackle this problem by keeping the data in local nodes and exchanging only the estimator at each optimization step, and have thus received significant attention recently. In this paper, we consider the problem of how to find an effective dimension-reduction space when the data are scattered across different nodes. We propose a decentralized algorithm that extends the minimum average variance estimation (MAVE) method. Theoretical results show that the proposed method has the same efficiency as the full-sample MAVE approach, even when batch effects exist across nodes. Simulation studies and real-data examples indicate that the proposed method dominates existing distributed algorithms.
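The decentralized MAVE algorithm is not reproduced here, but the communication pattern it relies on (local computation, exchanging only estimators rather than raw data) can be sketched with a hypothetical one-shot averaging of local least-squares fits:

```python
import numpy as np

rng = np.random.default_rng(2)
p, n_nodes, n_local = 3, 10, 500
beta = np.array([2.0, -1.0, 0.5])    # true coefficients shared by all nodes

# Each node fits OLS on its own private data and shares only its estimate.
local_estimates = []
for _ in range(n_nodes):
    X = rng.normal(size=(n_local, p))
    y = X @ beta + rng.normal(size=n_local)
    local_estimates.append(np.linalg.lstsq(X, y, rcond=None)[0])

# The coordinator averages the p-dimensional estimates; raw data never moves.
beta_avg = np.mean(local_estimates, axis=0)
print(beta_avg)  # close to the true coefficients
```

One-shot averaging is the simplest such scheme; iterative methods like the decentralized MAVE extension exchange estimators at every optimization step, which is needed for nonlinear criteria and for handling batch effects across nodes.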
Due to the recent development and maturation of database, data storage, data capture, and sensor technologies, huge volumes of medical and health data have been generated at hospitals and medical organizations at unprecedented speed. These data are a very valuable resource for improving health delivery, health care, and decision making, and for better risk analysis and diagnosis. Health care and medical services are becoming more data-intensive and evidence-based, since electronic health records are used to track individuals' and communities' health information (particularly changes). These developments substantially motivate and advance the emergence and progress of data-centric health data and knowledge management research and practice.
Yanchun Zhang has been a Professor and Director of the Centre for Applied Informatics at Victoria University since 2004. He obtained a PhD degree in Computer Science from The University of Queensland in 1991. His research interests include databases, data mining, web services, and e-health. He has published over 300 research papers in international journals and conference proceedings, including ACM Transactions on Computer-Human Interaction (TOCHI), IEEE Transactions on Knowledge and Data Engineering (TKDE), the VLDB Journal, and the SIGMOD and ICDE conferences, as well as a dozen books and journal special issues in related areas. Dr. Zhang is the founding editor-in-chief of World Wide Web Journal (Springer) and Health Information Science and Systems Journal (Springer), and also the founding editor of the Web Information Systems Engineering and Health Information Science book series. He is Chairman of the International Web Information Systems Engineering Society (WISE). He was a member of the Australian Research Council's College of Experts (2008-2010), and serves as an expert panel member for various funding agencies, including the National Natural Science Foundation of China (NSFC), China's "National 1000 Talents Program", and the Royal Society of New Zealand Marsden Fund. He has been one of the National "Thousand Talents Program" experts in China since 2010 (currently with Fudan University).
Co-evolving data streams can be found in many real-time applications, such as stock markets and operating theatres. This talk will present a novel perspective prediction model (PPM) that provides predictive analytics for co-evolving data streams. A dynamic and sequential gaming model and a graphical model will be built to tell the story along the timeline of past, present, and future.
Dr. Jing He is a professor at Nanjing University of Finance and Economics, China, and Swinburne University of Technology, Australia. She was awarded a PhD degree by the Academy of Mathematics and Systems Science, Chinese Academy of Sciences, in 2006. Prior to joining Victoria University, she worked at the University of Chinese Academy of Sciences during 2006-2008. She has been active in the areas of data mining, web services/web search, spatial and temporal databases, multiple criteria decision making, intelligent systems, and scientific workflows, and in such applied fields as e-health, petroleum exploration and development, water resource management, and e-research. She has published over 60 research papers in refereed international journals and conference proceedings, including ACM Transactions on Internet Technology (TOIT), IEEE Transactions on Knowledge and Data Engineering (TKDE), Information Systems, The Computer Journal, Computers and Mathematics with Applications, Concurrency and Computation: Practice and Experience, International Journal of Information Technology & Decision Making, Applied Soft Computing, and Water Resources Management. Since 2008 she has received over 1.5 million Australian dollars in research funding from the Australian Research Council (ARC), including an ARC Early Career Researcher Award (DECRA), ARC Discovery Projects, and ARC Linkage Projects, as well as funding from the National Natural Science Foundation of China (NSFC).
Dr. Song is a software engineer and scientist. He obtained his PhD from the School of Computing Science, University of Glasgow, in 2015. His work now mainly focuses on data science application and implementation in the area of Chinese government data asset operation. Last year he was awarded Excellent CTO of the Year 2017 by the China Software Industry Association. He is also active in data science academic research and serves as an entrepreneurship mentor at Peking University and Beijing Institute of Technology. He is a member of the Expert Committee on the Big Data Industry, registered with the Ministry of Industry and Information Technology of the P.R.C. He is also the director of the research on key supporting technology for service intellectual property rights and data resource transaction services, funded by the Ministry of Science and Technology of the P.R.C.