[Question] Can a model be used in environments with different observation_space sizes? #2031
Comments
Hello,
I see, thanks a lot for replying. I'll try to see if I can handle this; otherwise, I'm afraid I'll have to implement the whole thing without SB3, unfortunately. Still, I really appreciate your excellent work!
Hi there, it occurs to me that since the policy network actually has the same structure for both env_small and env_big, does it make any sense if I create model_A using env_small and model_B using env_big, and then:

```python
model_A = PPO("MultiInputPolicy", env_small, policy_kwargs=policy_kwargs, verbose=1)  # env_small to train
model_B = PPO("MultiInputPolicy", env_big, policy_kwargs=policy_kwargs, verbose=1)    # env_big to test
model_B.policy.load_state_dict(model_A.policy.state_dict())  # transfer model_A's policy weights to model_B
model_B.predict(obs_B)  # then I can deal with env_big using model_B with model_A's policy network
```

I've tried this and it seems to work. However, when it comes to saving and loading, I have to:

```python
model_A = PPO.load("model_A.zip")
model_B = PPO("MultiInputPolicy", env_big, policy_kwargs=policy_kwargs, verbose=1)
model_B.policy.load_state_dict(model_A.policy.state_dict())
```

It looks a little hacky, and I have to load model_A first just to get its state_dict even though I don't really need the rest of it. So I wonder: can I save and load only model_A's state_dict instead of the whole model? The old stable-baselines seems to have this feature (hill-a/stable-baselines#344); does SB3 also support it?
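(For reference, one way to do this with plain PyTorch — a minimal sketch, assuming the two policies have identical layer shapes; the file name `model_A_policy.pth` is just an example. SB3 policies are ordinary `torch.nn.Module`s, so `torch.save`/`torch.load` on the `state_dict` work directly:)

```python
import torch

# Save only the policy weights (a plain state_dict), not the whole SB3 model
torch.save(model_A.policy.state_dict(), "model_A_policy.pth")

# Later: build a fresh model for env_big and load the saved weights into its policy
model_B = PPO("MultiInputPolicy", env_big, policy_kwargs=policy_kwargs, verbose=1)
model_B.policy.load_state_dict(torch.load("model_A_policy.pth"))
```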
Looks fine.
Hi. Were you able to arrive at a solution? Did you find an alternative way of doing this?
In this idea, do you train the model on both A and B, or do you just use the trained model_A and test with model_B?
Like I said, my way is to transfer the state_dict.
I just used the trained model_A and tested with model_B. When it comes to training on different environments, my idea is still the same: create a model for each environment and synchronize their state_dicts. The key point is that although the model checks observation sizes, the policy network doesn't. I haven't found any "normal" solution to handle this situation. I'll be glad to hear if you make any further progress.
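(A rough sketch of that synchronization idea, assuming env_small and env_big are already constructed; the number of rounds and the timestep count below are placeholders, not values from this thread:)

```python
model_A = PPO("MultiInputPolicy", env_small, policy_kwargs=policy_kwargs, verbose=1)
model_B = PPO("MultiInputPolicy", env_big, policy_kwargs=policy_kwargs, verbose=1)

for _ in range(10):  # placeholder number of sync rounds
    model_A.learn(total_timesteps=10_000)  # train on the small env
    # The two policy networks share a structure, so the weights are interchangeable;
    # only the model-level observation-size checks differ between the two.
    model_B.policy.load_state_dict(model_A.policy.state_dict())
    # ... evaluate (or keep training) model_B on env_big, then sync back if needed ...
```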
Your idea works!!! Thank you.
❓ Question
I am trying to use stable-baselines3 to handle a graph-related problem. Graphs of different sizes have different numbers of nodes and edges, so the observation space of an environment defined on a graph varies in size as well.
My goal is to train an agent in environment A and then use it in environment B. I have written a custom features extractor based on a graph neural network, and it should be able to handle graph inputs of varying sizes.
However, when I feed an observation generated by environment B into the model, an error occurs.
Is there a way to handle this problem?
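(For context, a custom extractor in SB3 subclasses `BaseFeaturesExtractor`. Below is a minimal, hypothetical sketch of the idea described above, where mean-pooling over nodes makes the output size independent of the graph size. The observation key `node_features` and all dimensions are assumptions, not taken from this issue:)

```python
import torch
import torch.nn as nn
from gymnasium import spaces
from stable_baselines3.common.torch_layers import BaseFeaturesExtractor

class GraphExtractor(BaseFeaturesExtractor):
    """Hypothetical extractor: pools per-node features into a fixed-size vector,
    so the output dimension does not depend on the number of nodes."""

    def __init__(self, observation_space: spaces.Dict, features_dim: int = 64):
        super().__init__(observation_space, features_dim)
        node_feat_dim = observation_space["node_features"].shape[-1]
        self.node_mlp = nn.Sequential(nn.Linear(node_feat_dim, features_dim), nn.ReLU())

    def forward(self, observations: dict) -> torch.Tensor:
        # (batch, num_nodes, node_feat_dim) -> (batch, num_nodes, features_dim)
        h = self.node_mlp(observations["node_features"])
        # Mean-pool over nodes: output shape no longer depends on graph size
        return h.mean(dim=1)

policy_kwargs = dict(
    features_extractor_class=GraphExtractor,
    features_extractor_kwargs=dict(features_dim=64),
)
```

Note that even with a size-agnostic extractor, SB3 validates observations against the env's declared observation_space before they reach the policy, which is exactly the model-level size check the state_dict transfer above works around.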