
L3HCTF 2021 Deepdarkfantasy


DeepDarkFantasy

First, I tried loading the .pth file in Python to inspect the network structure, but it would not open. The hints say: Simple XOR | 1. print 2. Bottleneck 3. input tensor == output tensor

So the file has apparently been XOR-encrypted (with a single-byte key, as the brute force below confirms).

exp1: decrypt to obtain decrypted.pth

import binascii

with open('encrypted.pth', 'rb') as f:
    data = f.read()

# Show the first two encrypted bytes for reference
print(binascii.hexlify(data[:2]).decode('utf-8'))

# Brute-force the single-byte XOR key: a .pth checkpoint is a zip archive,
# so the decrypted file must start with the magic bytes 'PK' (0x50 0x4B).
# For this challenge the key turns out to be 0xde.
for key in range(256):
    if data[0] ^ key == 0x50 and data[1] ^ key == 0x4B:
        print('key:', hex(key))
        decrypted = bytes([b ^ key for b in data])
        with open('decrypted.pth', 'wb') as out:
            out.write(decrypted)
        break
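As a quick sanity check (a minimal sketch; it only assumes the checkpoint was saved in the zip-based format PyTorch has used since 1.6), the decrypted file should now be a valid zip archive:

import zipfile

# torch.save has produced zip archives since PyTorch 1.6, so a correctly
# decrypted .pth file should pass this check
print(zipfile.is_zipfile('decrypted.pth'))              # expect: True
print(zipfile.ZipFile('decrypted.pth').namelist()[:5])  # peek at the archive contents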

The first pitfall: the checkpoint was pickled with classes from a module called model, so the script defining them must be named model.py; otherwise loading fails with: ModuleNotFoundError: No module named 'model'

So rename the Python script to model.py. Next, load the network file and, guided by the error messages, keep adding the missing layers until the model loads and runs.

RuntimeError: Error(s) in loading state_dict for MyAutoEncoder: Unexpected key(s) in state_dict: "model", "state_dict". This tells us the checkpoint contains two extra top-level keys, so the natural next step is to look at what state_dict and model actually hold. Using PyCharm's debugger, single-step through the loading and set a breakpoint in the load_state_dict function.

In this way PyTorch's own deserializer hands us the full network structure and all of its parameters; we can then dump that information from the console to inspect the details.
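A minimal sketch of that inspection, done directly from a script instead of the debugger (the key names model and state_dict come from the error message above; run it from model.py so the pickled classes resolve):

import torch

ckpt = torch.load('decrypted.pth', map_location='cpu')
print(ckpt.keys())   # expect the extra keys reported above, e.g. 'model' and 'state_dict'

# Dump every parameter name and shape to reconstruct the architecture layer by layer
for name, tensor in ckpt['state_dict'].items():
    print(name, tuple(tensor.shape))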

Based on that structure we rebuild the required Autoencoder and read the parameters back from the decrypted file provided by the challenge.

The challenge author split an image containing the flag into many pieces, padded them, and trained the Autoencoder on them. As a result the decoder can generate flag fragments from a given latent feature, so we can control the output by controlling the value d fed into it.

The generated images are visibly split by dividing lines; after some trial and error, an input tensor of shape [1,1,1,1] turns out to generate exactly one fragment, so we write code that sweeps over d.

exp2:

import os
import torch
from torch.nn import *
from torchvision import transforms
  
 
class Encoder(Module):  
    def __init__(self):  
        super().__init__()  
        self.conv = Sequential(  
            Conv2d(1, 16, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)),  
            ReLU(),  
            MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False),  
            Conv2d(16, 8, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)),  
            MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False),  
            ReLU(),  
            Conv2d(8, 8, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)),  
            MaxPool2d(kernel_size=2, stride=2, padding=1, dilation=1, ceil_mode=False),  
            ReLU(),  
            MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False),  
            Flatten(start_dim=1, end_dim=-1)  
        )  
        self.fc=Linear(in_features=32, out_features=16, bias=True)  
  
    def forward(self, x):  
        x=self.conv(x)  
        x=self.fc(x)  
        return x  
  
class Decoder(Module):  
    def __init__(self):  
        super().__init__()  
        self.convt=Sequential(  
            ConvTranspose2d(1, 256, kernel_size=(1, 1), stride=(1, 1)),  
            ReLU(),  
            ConvTranspose2d(256, 256, kernel_size=(1, 1), stride=(1, 1)),  
            ReLU(),  
            ConvTranspose2d(256, 512, kernel_size=(1, 1), stride=(1, 1)),  
            ReLU(),  
            ConvTranspose2d(512, 128, kernel_size=(4, 4), stride=(4, 4)),  
            ReLU(),  
            ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(4, 4)),  
            ReLU(),  
            ConvTranspose2d(64, 32, kernel_size=(2, 2), stride=(2, 2)),  
            ReLU(),  
            ConvTranspose2d(32, 1, kernel_size=(2, 2), stride=(2, 2)),  
            Sigmoid()  
        )  
  
    def forward(self, x):  
        x = self.convt(x)  
        return x  
  
class MyAutoEncoder(Module):  
    def __init__(self):  
        super().__init__()  
        self.encoder=Encoder()  
        self.decoder=Decoder()  
  
    def forward(self, x):
        # the encoder is not needed here: latent values are fed straight into the decoder
        # x = self.encoder.forward(x)
        x = self.decoder.forward(x)
        return x
  
path = 'decrypted.pth'

mymodel = MyAutoEncoder()
mymodel.load_state_dict(torch.load(path, map_location=torch.device('cpu'))['state_dict'])
mymodel.eval()

os.makedirs('./pic', exist_ok=True)   # output directory for the generated fragments
toPIL = transforms.ToPILImage()

# Sweep the latent value d from -40.0 upwards in steps of 0.01
num = -40.0
with torch.no_grad():
    for i in range(7000):
        d = num + 0.01 * i
        data = torch.FloatTensor([[[[d]]]])   # input tensor of shape [1,1,1,1]
        pics = mymodel(data)[0]
        pic = toPIL(pics[0])
        pic.save('./pic/random' + str(i) + '.jpg')

From the generated images, pick out the fragments that belong to the flag.

Stitch them together in Photoshop.
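If you prefer not to use Photoshop, here is a minimal sketch of a programmatic stitch with Pillow; the fragment indices and the grid layout below are purely hypothetical and must be replaced with the pieces you actually picked out:

from PIL import Image

# Hypothetical example: indices of the hand-picked fragments, laid out
# row by row on a grid -- adjust to the pieces you actually selected
selected = [[101, 102, 103],
            [201, 202, 203]]

tiles = [[Image.open(f'./pic/random{i}.jpg') for i in row] for row in selected]
w, h = tiles[0][0].size
canvas = Image.new('L', (w * len(selected[0]), h * len(selected)))

for r, row in enumerate(tiles):
    for c, tile in enumerate(row):
        canvas.paste(tile, (c * w, r * h))

canvas.save('flag.png')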

This yields the flag:

L3HCTF{blacncoder0sekbinaryxb1a876a}