There are several ways to monitor Spark applications: the web UI, metrics, and external instrumentation.
Web Interfaces
Every SparkContext launches a web UI, by default on port 4040, that displays useful information about the application, including:
- A list of scheduler stages and tasks
- A summary of RDD sizes and memory usage
- Environmental information
- Information about the running executors
You can access this interface by opening http://<driver-node>:4040 in a web browser.
Note that by default this information is only available for the duration of the application. To view it after the fact, set spark.eventLog.enabled to true before starting the application. This configures Spark to log the Spark events that encode the information displayed in the UI to persisted storage.
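A minimal sketch of enabling event logging via conf/spark-defaults.conf; the log directory shown here is only a placeholder, and spark.eventLog.dir must point to storage reachable by both the application and whatever re-renders the UI later:

```
# conf/spark-defaults.conf -- enable event logging so the UI can be rebuilt after the fact
spark.eventLog.enabled   true
# placeholder path; use a directory visible to both the driver and the history server
spark.eventLog.dir       hdfs://namenode/spark-logs
```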
Viewing After the Fact
Spark's Standalone Mode cluster manager has its own web UI. If an application has logged events over the course of its lifetime, the Standalone master's web UI will automatically re-render the application's UI after the application has finished.
If Spark is run on Mesos or YARN, it is still possible to reconstruct the UI of a finished application through Spark's history server, provided that the application's event logs were preserved. You can start the history server by executing:
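A sketch of the invocation; the script ships with Spark under sbin/, and depending on the Spark version the base logging directory is either passed on the command line or supplied through spark.history.fs.logDirectory (see below):

```
# start the history server; point it at the directory holding the event logs
./sbin/start-history-server.sh
```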
When using the file-system provider class (spark.history.provider), the spark.history.fs.logDirectory configuration option must be supplied; it should contain sub-directories that each represent an application's event logs. By default the history server's UI is available at http://<server-url>:18080. The history server can be configured as follows:
Environment Variable | Meaning |
---|---|
SPARK_DAEMON_MEMORY | Memory to allocate to the history server (default: 512m). |
SPARK_DAEMON_JAVA_OPTS | JVM options for the history server (default: none). |
SPARK_PUBLIC_DNS | The public address for the history server. If this is not set, links to application history may use the internal address of the server, resulting in broken links (default: none). |
SPARK_HISTORY_OPTS | spark.history.* configuration options for the history server (default: none). |
Property Name | Default | Meaning |
---|---|---|
spark.history.provider | org.apache.spark.deploy.history.FsHistoryProvider | Name of the class implementing the application history backend. Currently there is only one implementation, provided by Spark, which looks for application logs stored in the file system. |
spark.history.fs.updateInterval | 10 | The period, in seconds, at which information displayed by this history server is updated. Each update checks for any changes made to the event logs in persisted storage. |
spark.history.retainedApplications | 50 | The number of application UIs to retain. If this cap is exceeded, then the oldest applications will be removed. |
spark.history.ui.port | 18080 | The port to which the web interface of the history server binds. |
spark.history.kerberos.enabled | false | Indicates whether the history server should use Kerberos to log in. This is useful if the history server is accessing HDFS files on a secure Hadoop cluster. If this is true, it uses the configs spark.history.kerberos.principal and spark.history.kerberos.keytab. |
spark.history.kerberos.principal | (none) | Kerberos principal name for the History Server. |
spark.history.kerberos.keytab | (none) | Location of the kerberos keytab file for the History Server. |
spark.history.ui.acls.enable | false | Specifies whether ACLs should be checked to authorize users viewing the applications. If enabled, access control checks are made regardless of what the individual application had set for spark.ui.acls.enable when the application was run. The application owner will always have authorization to view their own application, and any users specified via spark.ui.view.acls when the application was run will also have authorization to view that application. If disabled, no access control checks are made. |
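As a hedged illustration of how the settings above are typically combined, the following lines could be exported from conf/spark-env.sh before starting the history server; the memory size, log directory, and retention count are placeholders rather than recommended values:

```
# conf/spark-env.sh -- example history server settings (placeholder values)
export SPARK_DAEMON_MEMORY=1g
export SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs://namenode/spark-logs \
  -Dspark.history.retainedApplications=50 \
  -Dspark.history.ui.port=18080"
```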
Note that the tables in the UIs above are sortable by clicking their headers, which makes it easy to order the rows by whatever information you care about.
Metrics
Spark has a configurable metrics system based on the Coda Hale Metrics Library. It allows users to report Spark metrics to a variety of sinks, including HTTP, JMX, and CSV files. The metrics system is configured via $SPARK_HOME/conf/metrics.properties; a custom configuration file can be specified through the spark.metrics.conf property. Spark's metrics are decoupled into different instances corresponding to Spark components, and within each instance you can configure a set of sinks to which metrics are reported. The following instances are currently supported:
- master: Spark standalone master process.
- applications: A component within the master which reports on various applications.
- worker: A Spark standalone worker process.
- executor: A Spark executor.
- driver: The Spark driver process (the process in which your SparkContext is created).
Each instance can report to zero or more sinks. Sinks are contained in the org.apache.spark.metrics.sink package:
- ConsoleSink: Logs metrics information to the console.
- CSVSink: Exports metrics data to CSV files at regular intervals.
- JmxSink: Registers metrics for viewing in a JMX console.
- MetricsServlet: Adds a servlet within the existing Spark UI to serve metrics data as JSON.
- GraphiteSink: Sends metrics to a Graphite node.
Spark also supports a Ganglia sink, which is not included in the default build due to licensing restrictions:
- GangliaSink: Sends metrics to a Ganglia node or multicast group.
To install the GangliaSink you will need to perform a custom build of Spark. Note that by embedding this library you will include LGPL-licensed code in your Spark package. For sbt users, set the SPARK_GANGLIA_LGPL environment variable before building. For Maven users, enable the -Pspark-ganglia-lgpl profile. In addition to modifying the cluster's Spark build, user applications will need to link to the spark-ganglia-lgpl artifact.
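A hedged sketch of the two build paths described above; the exact build entry points vary between Spark releases, so treat these as illustrations rather than exact commands:

```
# sbt build with the Ganglia sink enabled (assumes the sbt launcher bundled with Spark)
SPARK_GANGLIA_LGPL=true sbt/sbt assembly

# Maven build with the Ganglia profile enabled
mvn -Pspark-ganglia-lgpl -DskipTests clean package
```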
The syntax of the metrics configuration file is defined in an example configuration file, $SPARK_HOME/conf/metrics.properties.template.
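For illustration, a minimal sketch of such a file, assuming the [instance].sink.[sink_name].[option] key layout used by the bundled template; the period and unit values below are placeholders:

```
# report metrics from every instance to the console every 10 seconds
*.sink.console.class=org.apache.spark.metrics.sink.ConsoleSink
*.sink.console.period=10
*.sink.console.unit=seconds

# additionally expose master metrics over JMX
master.sink.jmx.class=org.apache.spark.metrics.sink.JmxSink
```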
Advanced Instrumentation
Several external tools can be used to help profile the performance of Spark jobs:
- Cluster-wide monitoring tools, such as Ganglia, can provide insight into overall cluster utilization and resource bottlenecks. For instance, a Ganglia dashboard can quickly reveal whether a particular workload is disk bound, network bound, or CPU bound.
- OS profiling tools such as dstat, iostat, and iotop can provide fine-grained profiling on individual nodes.
- JVM utilities: jstack for providing stack traces, jmap for creating heap dumps, jstat for reporting time-series statistics, and jconsole/jvisualvm for visually exploring various JVM properties (example invocations are sketched below).
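A hedged sketch of typical invocations against a running driver or executor JVM; <pid> is a placeholder for the target process id, which can be found with jps:

```
jps                       # list running JVMs and their pids
jstack <pid>              # dump thread stack traces
jmap -histo <pid>         # print a histogram of heap objects
jstat -gcutil <pid> 1000  # report GC utilization every 1000 ms
```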
Translated from the Spark monitoring documentation.