Deep reinforcement learning has attracted attention through its application in deep Q-networks (DQNs). A DQN can attain superhuman performance, but it requires a large number of trial-and-error searches. Two approaches can reduce the number of trial-and-error searches needed for learning convergence in a DQN: multistep learning and deep Q-networks with profit sharing (DQNwithPS), which optimizes a neural network by combining DQN learning with profit sharing. Each approach, however, has its own disadvantage: multistep learning requires proper tuning of its prefetching parameter (the number of future steps used in the learning target), and DQNwithPS suffers from learning performance degradation because profit sharing does not consider the expected rewards of future episodes. In this paper, we propose a learning-accelerated DQN that combines multistep learning with DQNwithPS so that each method cancels the other's disadvantage: DQNwithPS alleviates the prefetching parameter tuning required by multistep learning, and multistep learning mitigates the learning performance degradation of DQNwithPS. With this method, we aim to reduce the number of trial-and-error searches compared with both a DQN and DQNwithPS, and to realize a manageable, fast learning method.
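To make the prefetching parameter concrete: in multistep learning, the learning target sums the next n observed rewards and then bootstraps from the Q-value n steps ahead, so n itself is the parameter that must be tuned. The sketch below is illustrative only (the function name and arguments are our own, not the paper's implementation), assuming `bootstrap_q` is max_a Q(s_n, a) from a target network:

```python
def n_step_target(rewards, bootstrap_q, gamma=0.99):
    """Illustrative n-step (multistep) Q-learning target:
    G = r_0 + gamma*r_1 + ... + gamma^(n-1)*r_{n-1} + gamma^n * max_a Q(s_n, a),
    where n = len(rewards) is the prefetching parameter.
    """
    g = 0.0
    for k, r in enumerate(rewards):
        g += (gamma ** k) * r          # discounted sum of the n observed rewards
    return g + (gamma ** len(rewards)) * bootstrap_q  # bootstrap n steps ahead

# Example: n = 3 rewards [1, 0, 1], bootstrap value 2.0, gamma = 0.5
# target = 1 + 0.5*0 + 0.25*1 + 0.125*2.0 = 1.5
print(n_step_target([1.0, 0.0, 1.0], 2.0, gamma=0.5))
```

A larger n propagates reward information faster but increases the variance of the target, which is why this parameter is sensitive and motivates the combination proposed above.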