Some months ago I decided to play with Mythic and, to be honest, I’ve had a great experience with it, not only because of the stability of the framework but also because of the way the code is structured. It’s fairly easy to implement custom profiles, custom agents, and custom communication (encryption/decryption), since most of the work is abstracted away from unrelated implementation details (UI code, for example). On top of that, you can write your agents in any language while the C2 side is designed in Python, which has a decent amount of libraries.
These reasons played a big role in moving some of my toolkit to Mythic and in adding more things to it, including a DNS profile. This post describes the whole process of creating a DNS profile for Mythic, so that it becomes a lot easier for anyone to implement this capability in a custom agent.
When designing a DNS profile there is a wide range of issues we need to address. Since DNS uses UDP for communication, it neither guarantees delivery nor provides a way to order the packets.
These problems become even harder to address when we consider that the packets will arrive out of order most of the time, since multi-threading may be in use on the agent side.
Many, many issues…
Considering the mentioned issues, the DNS profile should provide:
- Support for multiple connected implants.
- Protection against packet tampering (HMAC check in each packet).
- Packet ordering/acknowledgement.
- Support for multiple domain configurations.
- Message caching: cache the previous message to avoid DNS spikes, since some messages are exactly the same when there are no tasks for the agent.
- Agent recovery: Mythic has a way to recover agents by their UUID in case the server goes down. However, the DNS profile does not transmit the UUID in each packet; the UUID is only obtained when all the packets are assembled. Thus, it is necessary to create a mechanism to recover agents in case the server goes down.
- Fallback state: suppose the server goes down in the middle of a large command (e.g. register_assembly/execute_assembly); the agent would repeatedly send DNS queries while receiving query failure responses, resulting in a continuous DNS spike on the network. Therefore, the agent should be able to enter a state where the maximum number of threads is decreased and each packet is sent with the agent’s configured delay. The fallback state should remain active until the agent receives a successful DNS query response.
Packet Structure
In order to implement the Profile, it is first necessary to define the structure of the packets:

- Prefix: Defines the type of message. There are currently 2 types of messages: the initialization message and the communication message. The first is sent as soon as the agent tries to communicate with the server, while the second is used in all further communication.
- Channel ID + ACK ID: This portion contains both the channel identifier and the ACK identifier, encoded as a HEX string of 9 characters. The first 2 HEX characters define the Channel ID (a number from 1 to 200). There is an extra section in this field: the bit_flipper (a number from 0 to 4), which defines the state of the communication and will be explained soon. The next 6 HEX characters define a number that also initially starts between 1 and 200.
- Data: The actual data.
- HMAC: the HMAC-MD5 hash of the data (32 hex characters).
- Domain: the configured domain.
- Note: at the time of this writing, the DNS profile only supports DNS TXT queries.
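To make the layout above concrete, here is a minimal sketch of how an agent could assemble one TXT query name from those fields. All names and the exact packing of the 9-character header (2 hex chars of Channel ID, 1 of bit_flipper, 6 of SEQ/ACK) are my own assumptions for illustration, not the profile’s actual wire format:

```python
import hashlib
import hmac


def build_query(prefix: str, channel_id: int, bit_flip: int, seq: int,
                data_hex: str, key: bytes, domain: str) -> str:
    """Assemble one DNS TXT query name following the packet layout above.

    Hypothetical encoding: the header packs Channel ID (2 hex chars),
    bit_flipper (1 hex char) and SEQ/ACK (6 hex chars) into 9 characters.
    """
    header = f"{channel_id:02x}{bit_flip:01x}{seq:06x}"
    # HMAC-MD5 over the data portion, hex-encoded (32 characters).
    mac = hmac.new(key, data_hex.encode(), hashlib.md5).hexdigest()
    # DNS limits each label to 63 characters, so split the data accordingly.
    labels = [data_hex[i:i + 63] for i in range(0, len(data_hex), 63)]
    return ".".join([prefix, header, *labels, mac, domain])
```

The resulting name can then be resolved as a TXT record; the server parses the labels back out in the same order.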
Phase 0: Initialization
As mentioned before, there are 5 states of communication between the agent and the server.
The first state is the initialization state. In this state, the agent is responsible for defining the Channel ID and the ACK ID (again, both of them between 1 and 200). It is necessary to use the prefix for the initialization message and the bit_flip value for initialization (0). The structure of the query is defined below (e.g. the Apollo agent).

Right after receiving the DNS response, the agent should move to the next state (Agent turn), compare the channel_id in the server’s response with its own (if the channel_id is not the same, it means there was a collision and a new channel_id was assigned to the agent), and save the received ACK/SEQ as both the initial and the next value:
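A sketch of that response handling, with field and key names that are my own assumptions:

```python
def handle_init_response(sent_channel_id: int, resp_channel_id: int,
                         resp_ack: int) -> dict:
    """Process the server's initialization response (illustrative names).

    If the returned channel_id differs from the one the agent proposed,
    a collision occurred and the server assigned a new one.
    """
    return {
        "channel_id": resp_channel_id,                     # may differ on collision
        "collision": resp_channel_id != sent_channel_id,
        "initial_seq": resp_ack,                           # starting SEQ/ACK value
        "next_seq": resp_ack,                              # first packet the server expects
        "state": "agent_turn",                             # advance to Phase 1
    }
```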

Phase 1: Agent turn
After this point, all the initial configuration should be set on the agent side. In this phase, the agent starts sending the packets of the message. Each label in a DNS query can hold a maximum of 63 characters, so the message can be divided into portions of 63 characters. With the number of packets and the initial SEQ/ACK value, it is possible to compute the corresponding last SEQ/ACK, which will come in handy in a bit.
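The chunking and the last-SEQ computation can be sketched as follows (names are assumptions):

```python
def chunk_message(data_hex: str, initial_seq: int, label_len: int = 63):
    """Split the hex-encoded message into DNS-label-sized pieces and map
    each piece to its SEQ value."""
    chunks = [data_hex[i:i + label_len]
              for i in range(0, len(data_hex), label_len)]
    # SEQ of the final packet: once the server asks for anything beyond
    # this value, the agent knows the whole message was delivered.
    last_seq = initial_seq + len(chunks) - 1
    packets = {initial_seq + i: c for i, c in enumerate(chunks)}
    return packets, last_seq
```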

Now comes the fun part: the agent can issue the DNS queries in threads (5–15 threads should be okay so as not to disrupt the server, in case there are multiple agents). How does it identify that a specific packet was received by the server? The server simply replies with the next packet it wants, which is why there is a next_seq variable in the image above. This variable keeps track of the next packet the agent has to send to the server. If the agent is using threads, they will send the packet IDs corresponding to next_seq + 0, next_seq + 1, next_seq + 2, …, next_seq + (n − 1), where n is the number of threads. The next_seq is updated on every response with the most recent value:

Remember the last SEQ/ACK value we mentioned? The agent checks in each DNS response whether next_seq is bigger than the last ID it has; if so, the server is asking for a packet outside the range of messages to be sent, meaning it has already received all the messages from the agent. At this point, the agent should once again transition to its next state.
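Putting the two previous paragraphs together, one window of the threaded send loop could look like this. `send` is a hypothetical callable that issues one DNS query and returns the SEQ the server asks for next; the state keys are likewise assumptions:

```python
import threading


def send_window(packets: dict, state: dict, send, n_threads: int = 5):
    """Send the next window of packets in parallel and update next_seq
    from the server's replies (sketch, not the actual implementation)."""
    lock = threading.Lock()

    def worker(seq: int):
        if seq not in packets:
            return                                  # beyond the last packet
        wanted = send(seq, packets[seq])            # server replies with next wanted SEQ
        with lock:
            # keep the most recent (highest) value the server asked for
            state["next_seq"] = max(state["next_seq"], wanted)
            # a request beyond the last SEQ means everything was delivered
            if state["next_seq"] > state["last_seq"]:
                state["done"] = True

    threads = [threading.Thread(target=worker, args=(state["next_seq"] + i,))
               for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```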
Phase 2: Message Count
It is necessary to change the bit_flip value to the corresponding phase (3). This lets the server detect that the agent is now ready to receive the number of packets the server will send. After receiving that value, it is possible to once again compute the last SEQ/ACK (now for the server’s messages).
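A minimal sketch of that transition, assuming (my assumption, not confirmed by the source) that the server’s packets start counting from the ACK value saved at initialization:

```python
def enter_message_count_phase(state: dict, count_from_server: int):
    """Flip to the message-count phase (bit_flip = 3) and derive the
    server-side last SEQ/ACK from the packet count the server sent back."""
    state["bit_flip"] = 3
    state["server_last_seq"] = state["initial_ack"] + count_from_server - 1
    state["received"] = {}          # storage for the server's packets
```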

Phase 3: Server Turn
In this phase, the agent starts receiving the messages from the server. The procedure is very similar to when the agent was sending the packets, with the difference that it will now store the packets and inspect the bit_flip received from the server, in case it is signaling that everything was sent.

Remember we were using next_seq to tell the server which packet we needed (in the agent turn)? In the server turn, things are a little bit different: a thread is set up to inspect the packets obtained from the server. This thread keeps checking the last packet ID in a complete sequence of packets:


The next_seq value will be updated according to the first packet missing in a gap.
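The inspector thread’s check can be sketched as a simple gap search (names are assumptions):

```python
def next_missing(received: set, initial_seq: int, last_seq: int) -> int:
    """Return the first SEQ not yet received, so next_seq can be set to it
    and the server resends from there."""
    for seq in range(initial_seq, last_seq + 1):
        if seq not in received:
            return seq
    return last_seq + 1   # no gap: everything arrived
```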
With all the packets from the server, the agent is able to reconstruct, decrypt, and add the message to the inbox.
Phase 4: Reset
The last phase only serves to acknowledge to the server that the agent successfully processed its message and is ready to start sending its own contents. In this phase, only the bit_flip is set (value of 4) and the packet structure is cleared.
This summarizes the whole process the agent should accomplish in order to communicate with the DNS profile.
Extra Mile: Optimizations
Mythic 2.2 provides a mechanism to customize the message format sent from/to your agent. By default, the messages are base64-encoded and encrypted with the UUID, and they translate to a JSON document containing all the instructions.
When the message is converted to a HEX format (the format supported by the DNS profile), it gets considerably big, resulting in a large number of DNS queries to transmit the information. This causes an unnecessary DNS spike on the network every time the agent calls home, since the content of these encrypted messages will be the same if there are no tasked commands.
This observation led to implementing a caching system in the DNS agent. It is worth mentioning that, in the initialization phase, the content of the data field sets a code for the caching mechanism. The agent is responsible for setting this code (line 3 in the image below).

The server saves this code, and whenever the agent receives it back, the server is signaling that the agent can reuse the last saved message and simply add it to the inbox.
The caching mechanism works both ways: if the agent notices that the message it is about to send to the server is the same as the last generated message, it can simply replace the whole message with the cache code. This tells the server to use the last received pack of messages from the agent.
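The agent-side half of that substitution could be sketched like this (state keys and the cache code value are illustrative assumptions):

```python
def outbound_or_cache(message_hex: str, state: dict) -> str:
    """If the message about to be sent equals the last one sent, substitute
    the short cache code agreed during initialization; otherwise remember
    the message and send it whole."""
    if message_hex == state.get("last_sent"):
        return state["cache_code"]    # server replays its cached copy
    state["last_sent"] = message_hex
    return message_hex
```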
Extra Mile: Fallback State
During the development of the DNS profile, one of the biggest concerns was making the connection reliable, which means avoiding any kind of disruption between the agent and the server that would result in the implant never connecting back again.
Since we are talking about DNS communications, using loads of queries combined with threading may result in a lot of issues and query failures. Both the profile and the Apollo implementation provide a mechanism to recover the connection in case of a server failure. This means that if the queries are not being delivered, the agent will enter a fallback state.
The agent is capable of identifying consecutive query failures and entering a state where the maximum number of threads allowed is decreased to 1, while the delay between queries becomes equivalent to the CallbackDelay configured at agent creation.
If the failure persists, it might mean the server itself went boom. When the server is turned off, all information regarding the channel IDs and connected agents might be lost. When the server comes back up, it will no longer be able to process messages from agents that were already active, since it will be expecting messages in Phase 0. So, if the server receives a query from an uninitialized agent, it will respond with a query failure too.
In this case, the agent will keep getting DNS query failures and will enter yet another state: the recovery state. In this state, the agent resets its configuration, goes back to initialization mode, and queries the server with initialization messages.
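The fallback and recovery transitions can be sketched as a small failure counter. The threshold values here are illustrative assumptions, not the profile’s actual settings:

```python
def on_query_result(state: dict, ok: bool,
                    fallback_after: int = 5, recover_after: int = 20):
    """Track consecutive DNS failures and transition between the normal,
    fallback and recovery modes."""
    if ok:
        state["failures"] = 0
        state["mode"] = "normal"       # a successful query ends the fallback state
        return
    state["failures"] = state.get("failures", 0) + 1
    if state["failures"] >= recover_after:
        state["mode"] = "recovery"     # reset and re-run Phase 0 (initialization)
    elif state["failures"] >= fallback_after:
        state["mode"] = "fallback"     # 1 thread, CallbackDelay between queries
```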
Conclusion
In this article, we’ve tried to describe all the procedures necessary to implement DNS communication in any custom Mythic agent. With time, more DNS query types will be implemented, along with Host Rotation.
Resources:
https://github.com/ed-caicedo/DnsRip