Dr. Elephant MySQL connection error
This is the first time I have tried to write my blog in English, so please don't jeer at my grammar and spelling mistakes.
Because Dr. Elephant's multi-threaded fetching was putting a very heavy load on the JobHistoryServer, I had stopped it for a stretch of time. Then, last week, a pull request that makes Dr. Elephant pull from the JHS periodically was merged on GitHub, so I re-compiled Dr. Elephant and deployed the new build on the cluster. It seemed stable, but on Monday morning my leader told me that Dr. Elephant no longer showed any counters or other information about the cluster's jobs. So I logged in to the server and checked the log, where I found the message below.
[error] c.j.b.ConnectionHandle - Database access problem. Killing off this connection and all remaining connections in the connection pool. SQL State = HY000
I then found two things that can cause this issue. One of them is SELinux being set to enforcing; after changing that setting to disabled and rebooting the server things looked better, but the same error still appeared, only less often.
Then I reviewed the Dr. Elephant code and found a problem in the MySQL table-creation script that the Play framework runs at initialization: the index prefix length. I had previously limited the indexed prefix to 250 characters, but when a URL is longer than 250 characters things go wrong.
create index yarn_app_result_i4 on yarn_app_result (flow_exec_id(250));
create index yarn_app_result_i5 on yarn_app_result (job_def_id(250));
create index yarn_app_result_i6 on yarn_app_result (flow_def_id(250));
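To make the limit easier to see, here is a rough sketch of the length arithmetic on a throwaway table (the prefix_demo table is made up purely for illustration, and I am assuming MySQL 5.6-era defaults, i.e. the Antelope file format with innodb_large_prefix=OFF):

-- Hypothetical table, only to illustrate the index length limits.
CREATE TABLE prefix_demo (url VARCHAR(800) NOT NULL) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- utf8 reserves up to 3 bytes per character, so a full index on this column
-- needs 800 * 3 = 2400 bytes. With the default 767-byte limit (Antelope file
-- format, COMPACT row format, innodb_large_prefix=OFF) MySQL either rejects it
-- with error 1071 "Specified key was too long; max key length is 767 bytes"
-- or, with innodb_strict_mode off, silently indexes only the first 767 bytes.
CREATE INDEX prefix_demo_i1 ON prefix_demo (url);

-- A 250-character prefix fits: 250 * 3 = 750 bytes < 767. That is why the
-- script above used (250). Barracuda + ROW_FORMAT=DYNAMIC + innodb_large_prefix
-- raise the ceiling to 3072 bytes, so the full 2400-byte key becomes legal.
CREATE INDEX prefix_demo_i2 ON prefix_demo (url(250));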
So I removed the prefix limit from those indexes and rewrote the SQL as shown below, adding innodb_large_prefix and ROW_FORMAT=DYNAMIC to the table-creation script. After that, no more error logs appeared…
SET GLOBAL innodb_file_format = Barracuda;
SET GLOBAL innodb_large_prefix = ON;

CREATE TABLE yarn_app_result (
  id             VARCHAR(50)           NOT NULL            COMMENT 'The application id, e.g., application_1236543456321_1234567',
  name           VARCHAR(100)          NOT NULL            COMMENT 'The application name',
  username       VARCHAR(50)           NOT NULL            COMMENT 'The user who started the application',
  queue_name     VARCHAR(50)           DEFAULT NULL        COMMENT 'The queue the application was submitted to',
  start_time     BIGINT UNSIGNED       NOT NULL            COMMENT 'The time in which application started',
  finish_time    BIGINT UNSIGNED       NOT NULL            COMMENT 'The time in which application finished',
  tracking_url   VARCHAR(255)          NOT NULL            COMMENT 'The web URL that can be used to track the application',
  job_type       VARCHAR(20)           NOT NULL            COMMENT 'The Job Type e.g, Pig, Hive, Spark, HadoopJava',
  severity       TINYINT(2) UNSIGNED   NOT NULL            COMMENT 'Aggregate severity of all the heuristics. Ranges from 0(LOW) to 4(CRITICAL)',
  score          MEDIUMINT(9) UNSIGNED DEFAULT 0           COMMENT 'The application score which is the sum of heuristic scores',
  workflow_depth TINYINT(2) UNSIGNED   DEFAULT 0           COMMENT 'The application depth in the scheduled flow. Depth starts from 0',
  scheduler      VARCHAR(20)           DEFAULT NULL        COMMENT 'The scheduler which triggered the application',
  job_name       VARCHAR(255)          NOT NULL DEFAULT '' COMMENT 'The name of the job in the flow to which this app belongs',
  job_exec_id    VARCHAR(800)          NOT NULL DEFAULT '' COMMENT 'A unique reference to a specific execution of the job/action(job in the workflow). This should filter all applications (mapreduce/spark) triggered by the job for a particular execution.',
  flow_exec_id   VARCHAR(255)          NOT NULL DEFAULT '' COMMENT 'A unique reference to a specific flow execution. This should filter all applications fired by a particular flow execution. Note that if the scheduler supports sub-workflows, then this ID should be the super parent flow execution id that triggered the applications and sub-workflows.',
  job_def_id     VARCHAR(800)          NOT NULL DEFAULT '' COMMENT 'A unique reference to the job in the entire flow independent of the execution. This should filter all the applications(mapreduce/spark) triggered by the job for all the historic executions of that job.',
  flow_def_id    VARCHAR(800)          NOT NULL DEFAULT '' COMMENT 'A unique reference to the entire flow independent of any execution. This should filter all the historic mr jobs belonging to the flow. Note that if your scheduler supports sub-workflows, then this ID should reference the super parent flow that triggered all the jobs and sub-workflows.',
  job_exec_url   VARCHAR(800)          NOT NULL DEFAULT '' COMMENT 'A url to the job execution on the scheduler',
  flow_exec_url  VARCHAR(800)          NOT NULL DEFAULT '' COMMENT 'A url to the flow execution on the scheduler',
  job_def_url    VARCHAR(800)          NOT NULL DEFAULT '' COMMENT 'A url to the job definition on the scheduler',
  flow_def_url   VARCHAR(800)          NOT NULL DEFAULT '' COMMENT 'A url to the flow definition on the scheduler',

  PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ROW_FORMAT=DYNAMIC;

create index yarn_app_result_i1 on yarn_app_result (finish_time);
create index yarn_app_result_i2 on yarn_app_result (username,finish_time);
create index yarn_app_result_i3 on yarn_app_result (job_type,username,finish_time);
create index yarn_app_result_i4 on yarn_app_result (flow_exec_id);
create index yarn_app_result_i5 on yarn_app_result (job_def_id);
create index yarn_app_result_i6 on yarn_app_result (flow_def_id);
create index yarn_app_result_i7 on yarn_app_result (start_time);

CREATE TABLE yarn_app_heuristic_result (
  id                 INT(11)               NOT NULL AUTO_INCREMENT COMMENT 'The application heuristic result id',
  yarn_app_result_id VARCHAR(50)           NOT NULL                COMMENT 'The application id',
  heuristic_class    VARCHAR(255)          NOT NULL                COMMENT 'Name of the JVM class that implements this heuristic',
  heuristic_name     VARCHAR(128)          NOT NULL                COMMENT 'The heuristic name',
  severity           TINYINT(2) UNSIGNED   NOT NULL                COMMENT 'The heuristic severity ranging from 0(LOW) to 4(CRITICAL)',
  score              MEDIUMINT(9) UNSIGNED DEFAULT 0               COMMENT 'The heuristic score for the application. score = severity * number_of_tasks(map/reduce) where severity not in [0,1], otherwise score = 0',

  PRIMARY KEY (id),
  CONSTRAINT yarn_app_heuristic_result_f1 FOREIGN KEY (yarn_app_result_id) REFERENCES yarn_app_result (id)
);

create index yarn_app_heuristic_result_i1 on yarn_app_heuristic_result (yarn_app_result_id);
create index yarn_app_heuristic_result_i2 on yarn_app_heuristic_result (heuristic_name,severity);

CREATE TABLE yarn_app_heuristic_result_details (
  yarn_app_heuristic_result_id INT(11)      NOT NULL            COMMENT 'The application heuristic result id',
  name                         VARCHAR(128) NOT NULL DEFAULT '' COMMENT 'The analysis detail entry name/key',
  value                        VARCHAR(255) NOT NULL DEFAULT '' COMMENT 'The analysis detail value corresponding to the name',
  details                      TEXT                             COMMENT 'More information on analysis details. e.g, stacktrace',

  PRIMARY KEY (yarn_app_heuristic_result_id,name),
  CONSTRAINT yarn_app_heuristic_result_details_f1 FOREIGN KEY (yarn_app_heuristic_result_id) REFERENCES yarn_app_heuristic_result (id)
);

create index yarn_app_heuristic_result_details_i1 on yarn_app_heuristic_result_details (name);
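If you want to double-check that the new settings and indexes really took effect, a few standard MySQL commands are enough (just a verification sketch; innodb_file_format and innodb_large_prefix exist on MySQL 5.5–5.7 and were removed in 8.0):

SHOW VARIABLES LIKE 'innodb_file_format';   -- expect Barracuda
SHOW VARIABLES LIKE 'innodb_large_prefix';  -- expect ON
SHOW CREATE TABLE yarn_app_result;          -- expect ROW_FORMAT=DYNAMIC in the table options
SHOW INDEX FROM yarn_app_result;            -- Sub_part should now be NULL (no prefix) for i4/i5/i6

Also keep in mind that SET GLOBAL does not survive a mysqld restart, so the two innodb settings should go into my.cnf as well if you want them to stick.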
Finally, I sent a pull request to LinkedIn's repository on github.com…
