PyTorch CUDA out of memory

The error typically looks like:

RuntimeError: CUDA out of memory. Tried to allocate X MiB (GPU 0; Y GiB total capacity; Z GiB already allocated; W MiB free; V GiB reserved in total by PyTorch)

PyTorch raises it whenever a requested allocation no longer fits in the free memory of the GPU.

Oct 12, 2019 · Pytorch CUDA out of memory: analysis and solutions for insufficient GPU memory. Posted by LZY on October 12, 2019.

Jun 08, 2020 · RuntimeError: CUDA out of memory. OS: Windows 10; PyTorch version: 1.x (+cu101 build); model: EfficientDet-D4. Note I am not running it on my own GPU; I am running it using the free GPU acceleration from Google Colab.

Quick fixes when the error appears:
1. Reduce the batch size. If GPU memory is not enough, a properly reduced batch size usually lets the program run.
2. Send the batches to CUDA iteratively and make small batch sizes; do not send all your data to CUDA at once at the beginning.
3. Restart the computer (or at least the Python process) so memory held by a stale run is released; I will try --gpu-reset if the problem occurs again.
4. If the installation is mismatched with the driver, uninstall the old PyTorch (conda uninstall pytorch) and install a build that matches, e.g. NVIDIA driver 418.96, which comes along with CUDA 10.1.

Dec 16, 2020 · Yes, these ideas are not necessarily for solving the out-of-CUDA-memory issue, but while applying these techniques there was a well noticeable decrease in training time, and they got me ahead by 3 training epochs, where each epoch was taking approximately over 25 minutes.
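The batching advice above can be sketched as follows. This is a minimal, hypothetical example (the toy dataset and the tiny linear model are illustrative, not from the original posts): only one small batch at a time is moved to the GPU, so the full dataset never occupies device memory.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A toy dataset standing in for real training data.
features = torch.randn(256, 10)
labels = torch.randint(0, 2, (256,))

# Small batches: 8 samples at a time instead of all 256.
loader = DataLoader(TensorDataset(features, labels), batch_size=8)

model = torch.nn.Linear(10, 2).to(device)

for x, y in loader:
    # Only this batch is transferred; the rest of the data stays on the host.
    x, y = x.to(device), y.to(device)
    out = model(x)
```

The same pattern applies to any model: keep the dataset on the CPU (or on disk) and let the DataLoader feed the GPU one batch at a time.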
My graph is a batched one formed by 300 subgraphs, with the following total nodes and edges: Graph(num_nodes={'ent': 31167}, ...).

pytorch programs hit "cuda out of memory" in two main situations:

1. Right at the start of the run. Fixes: a) reduce the batch size; b) get more GPU memory (or parallelize across devices).
2. During the run, especially after it has been going for a long time. a) First check whether a few unusually long individual instances are the cause.

At first I suspected that the graphics cards on the server were being used, but nvidia-smi showed that none of the three GPUs were busy.

I have a custom dataset of about 40 hours of voice data. Some utterances are much longer than the rest (many are longer than 40 seconds, one is about a minute and a half), and some are very short (1 or 2 seconds). I think the long ones are causing the issue, but I need your comments on that. I am also assuming, but not sure, that the computation graph created when the last batch is trained is still stored on the CUDA device.

Jul 31, 2021 · On Linux, the memory capacity seen with the nvidia-smi command is GPU memory, while the memory seen with the htop command is the ordinary host RAM used for executing programs; the two are different.
Some reports in more detail:

More specifically, the function cudaFreeHost() returned a success code, but the memory was not de-allocated; after some time the GPU pinned memory filled up and the software ended with the "CUDA out of memory" message.

CUDA out of memory in Jupyter using PyTorch: after an exception, tensors referenced by the traceback keep GPU memory alive, so the error persists until the kernel is restarted.

torch.cuda.empty_cache() releases the majority, but not all, of the memory: it can only return cached blocks that no live tensor occupies.

Apr 23, 2020 · How to avoid "CUDA out of memory" in PyTorch: do not move the whole dataset to the GPU up front; transfer one small batch at a time.

Now the same scripts that loaded models and trained fine all cause CUDA out of memory errors (unless I set parameters to very small values, much smaller than the limits of a GTX 1080). Problem description: the GPU memory looks clear enough, so why does PyTorch still run out? Note that CUDA is limited to using a single CI (compute instance) and will pick the first one available if several of them are visible. Again, the previous solution didn't work for me, as I was already moving data to CUDA in batches.
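To see whether deleting tensors and emptying the cache actually helps, it is useful to print the allocated and reserved counters before and after. The helper name below is my own; it only uses documented torch.cuda functions and degrades gracefully on a machine without a GPU.

```python
import torch

def report_gpu_memory(device=0):
    """Print allocated vs. reserved (cached) GPU memory in MiB."""
    if not torch.cuda.is_available():
        print("CUDA not available")
        return
    alloc = torch.cuda.memory_allocated(device) / 2**20
    reserved = torch.cuda.memory_reserved(device) / 2**20
    print(f"allocated: {alloc:.1f} MiB, reserved: {reserved:.1f} MiB")

if torch.cuda.is_available():
    x = torch.randn(1024, 1024, device="cuda")  # ~4 MiB of float32
    report_gpu_memory()
    del x                      # drop the last reference to the tensor...
    torch.cuda.empty_cache()   # ...then return cached blocks to the driver
    report_gpu_memory()
```

The first call typically shows nonzero allocated memory; after `del` plus `empty_cache()`, allocated drops while some reserved memory may remain with the allocator.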
Jun 08, 2020 (continued) · When I trained with batch size 1 it took about 9.5 GiB of GPU RAM; then I tried to increase the batch size and it returned: # Batch_size = 2: CUDA out of memory. I am posting the solution as an answer for others who might be struggling with the same problem.

How to avoid "CUDA out of memory" in PyTorch: send the batches to CUDA iteratively, and make small batch sizes. Don't send all your data to CUDA at once in the beginning.

nvidia-smi data: the machine has 2 x NVIDIA GeForce GTX 1080 Ti cards.

My problem: CUDA out of memory after 10 iterations of one epoch. It made me think that after an iteration I lose track of CUDA variables which, surprisingly, were not collected by the garbage collector. Solution: delete CUDA variables manually (del variable_name) after each iteration. Also use with torch.no_grad(): when testing the code.

Jul 07, 2021 · pytorch RuntimeError: CUDA out of memory. I believe most people who use PyTorch to run programs on a server have encountered this; "run out of memory" simply means there is not enough GPU memory left for the requested allocation.

To monitor GPU memory occupation in real time: watch -n 0.5 nvidia-smi
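The "delete CUDA variables manually after each iteration" fix looks like this in practice. A minimal sketch with an assumed toy model; the point is the `del` at the end of each iteration, which lets the caching allocator reuse that memory instead of growing.

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Linear(100, 100).to(device)

for step in range(3):
    batch = torch.randn(32, 100, device=device)
    out = model(batch)
    loss = out.sum()
    # Drop references the next iteration no longer needs, so their
    # memory can be reused rather than accumulating across iterations.
    del batch, out, loss

if torch.cuda.is_available():
    torch.cuda.empty_cache()  # optionally hand cached blocks back to the driver
```

In many loops the explicit `del` is unnecessary because rebinding the names each iteration has the same effect; it matters when a reference would otherwise survive the iteration (e.g. a stored output list).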
Apr 30, 2019 · Before running the training loop, I tried printing out the GPU memory usage to see how it looks: cuda:0 showed about 6.09 GB in use and cuda:1, cuda:2, cuda:3 showed 0.0 GB, so plenty of room.

The learnable parameters in a fully-connected layer, nn.Linear(m, n) in PyTorch, use O(nm) memory: that is to say, the memory requirements scale quadratically with the number of features.

Basically, what PyTorch does is create a computational graph whenever I pass data through my network, and it stores the computations in GPU memory in case I want to calculate the gradient during backpropagation. Holding references to such graphs (for example, keeping every iteration's loss tensor) prevents the allocated memory from being freed by deleting the tensors.

Hi, I have several scripts using tensorflow, pytorch, etc., leveraging CUDA/cuDNN. I am working on implementing UNet for image segmentation using PyTorch. How is it possible that PyTorch required this huge amount of memory? Is it normal, or did I do something wrong?

Jan 17, 2020 · When trying to interpolate large frame sizes in DainApp and you get an out-of-memory message, turn on the "Split Frames" option under the "Fix OutOfMemory Options" tab.
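The graph-accumulation problem described above has a one-line fix: accumulate `loss.item()` (a plain Python float) instead of the loss tensor itself, since the tensor keeps its whole computation graph alive. A minimal sketch with an assumed toy model:

```python
import torch

model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

total_loss = 0.0
for _ in range(5):
    x = torch.randn(4, 10)
    loss = (model(x) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    # .item() detaches the value from the graph; writing
    # `total_loss += loss` would retain every iteration's graph in memory.
    total_loss += loss.item()
```

The same applies to logging: store `.item()` values or `.detach()`ed copies, never the live loss tensors.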
Jun 16, 2020 · Use del tensor_variable_name to clear GPU memory, and call torch.cuda.empty_cache() afterwards.

Jul 02, 2021 · I am trying to train two DNNs jointly. The model trains and goes to the validation phase after every 5 epochs; after the first 5 epochs it is okay and there is no problem with memory, but after 10 epochs the model complains about CUDA memory. The batch size of the data is too large to fit in the GPU.

Challenge: by adding additional layers, work out how deep you can make your network before running out of GPU memory when using a batch size of 32.

Once the device is full, even a tiny allocation fails:

>>> torch.rand(1).cuda(0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.

Feb 27, 2021 · RuntimeError: CUDA out of memory occurs while training a PyTorch model. I got most of the notebook to run by playing with batch size, clearing the CUDA cache, and other memory management.

Apr 03, 2020 · Memory Leakage with PyTorch. Freed memory is cached so that it can be quickly allocated to new tensors without requesting extra memory from the OS, which is why nvidia-smi can report high usage even when few tensors are live.
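A standard mitigation for the validation-time OOM described above is to run evaluation under torch.no_grad(), so no autograd graph is recorded for the forward pass. A minimal sketch with an assumed toy model:

```python
import torch

model = torch.nn.Linear(10, 2)
model.eval()  # switch layers like dropout/batchnorm to inference mode

x = torch.randn(3, 10)
with torch.no_grad():
    # No computation graph is built here, so activations are not
    # retained for backprop and the memory cost is much lower.
    out = model(x)
```

Because no graph exists, `out` carries no gradient history; trying to call `out.backward()` would fail, which is exactly what you want during validation.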
Oct 12, 2019 · Pytorch CUDA out of memory: analysis and resolution. "CUDA out of memory" means the GPU's memory has all been allocated and no more space can be handed out, so the allocation overflows. If the code itself has no problem, then to resolve the error we either reduce the batch size during training or reduce it at inference (translation) time.

Apr 10, 2020 · pytorch "CUDA error: out of memory", description and solution: the model reports the error during training. Judge whether the model or the batch size is too large, and either optimize the model or reduce the batch size. For example, the error appears when the already-allocated memory is close to the main GPU's total and the memory still to be allocated is larger than what remains available in the cache (306M > 148.3M).

Jan 26, 2019 · This thread is to explain and help sort out the situations when an exception happens in a Jupyter notebook and the user can't do anything else without restarting the kernel and re-running the notebook from scratch. This usually happens when a CUDA out of memory exception occurs, but it can happen with any exception.

Training note: due to limited GPU memory resources, the training batch size should not be too large, or it will lead to out-of-memory errors. I eventually upgraded to a Quadro RTX 8000.

For DainApp, leave the x=2, y=2 defaults and 150px padding as they are for now and try feeding the frames to Dain.

From my previous experience with this problem, either you do not free the CUDA memory or you try to put too much data on CUDA at once. To watch usage live: watch -n 0.5 nvidia-smi
Check whether GPU memory is genuinely insufficient: try reducing the training batch size; if the error persists even at the minimum, use the following command to monitor memory occupation in real time:

watch -n 0.5 nvidia-smi

I figured out where I was going wrong. Can this be related to the PyTorch and CUDA versions I'm using? I am limited to CUDA 9, so I stuck with PyTorch 1.0 instead of the newest version.

A sanity check on batch memory: if a float is 32 bits, or 4 bytes, a batch of 32 single-channel 256x256 images should be 4 * 32 * 256 * 256 bytes per batch, or 8388608 bytes, which is only 8 MB.

torch.cuda.empty_cache() clears the PyTorch cache area inside the GPU. By "not freeing the CUDA memory" I mean you potentially still have references to tensors in CUDA that you do not use anymore; those references keep the allocations alive.
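The batch arithmetic above is worth doing for your own shapes before blaming the framework. It is pure arithmetic, so a few lines of Python suffice:

```python
# Back-of-envelope memory estimate for one batch of 32 single-channel
# 256x256 float32 images, as in the text (4 bytes per float32).
bytes_per_float = 4
batch = 32
height = width = 256

batch_bytes = bytes_per_float * batch * height * width
print(batch_bytes)          # 8388608
print(batch_bytes / 2**20)  # 8.0 (MiB)
```

Remember that inputs are only a small part of the story: activations of every layer, parameters, gradients, and optimizer state (often two extra copies of the parameters for Adam) usually dominate the total.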
Shedding some light on the causes behind the CUDA out of memory error, and an example of how to reduce your memory footprint by 80% with a few lines of code in Pytorch.

1. Reduce the batch size.
2. Sep 28, 2019 · If you don't see any memory release after the call to empty_cache(), you would have to delete some tensors before it; otherwise their references keep the memory allocated.
3. If decreasing the batch size or restarting the notebook does not work, check for such lingering references.

I tried adding memory stats to learn.summary() for CNNs at the beginning and end of each hook block iteration, to see how much memory was added by each block, and then return the CUDA memory stats along with the other summary data.

Feb 18, 2020 · I'm not familiar with fastai, but there should be dynamic memory allocation for CUDA.

Oct 27, 2020 · Hi, I have a question about CUDA out of memory. I already know how to solve it; I just wonder about the meaning of the error.

Then the next problem was RuntimeError: CUDA out of memory. The fact that training with TensorFlow 2.3 runs smoothly on the GPU on my PC, yet only PyTorch fails to allocate memory for training, points at something PyTorch- or driver-specific. The same Windows 10 + CUDA 10.1 + cuDNN 7.x + NVIDIA driver setup is on both the laptop and the PC; they all worked with my GTX 1080 before. The issue is with the CUDA memory de-allocation function, which has stopped working properly with the latest NVIDIA GPU drivers.

Feb 14, 2018 · I tried using a 2 GB NVIDIA card for lesson 1. I decided my time is better spent using a GPU card with more memory.
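The original post does not say which "few lines of code" it means, but one standard few-line change that substantially cuts activation memory is automatic mixed precision (torch.cuda.amp). This is a hedged sketch, not necessarily the technique the post had in mind; it falls back to plain float32 when no GPU is present.

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
use_amp = torch.cuda.is_available()  # autocast to fp16 only makes sense on GPU

model = torch.nn.Linear(10, 1).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

x = torch.randn(8, 10, device=device)
with torch.cuda.amp.autocast(enabled=use_amp):
    # Inside autocast, eligible ops run in float16, halving activation memory.
    loss = (model(x) ** 2).mean()

opt.zero_grad()
scaler.scale(loss).backward()  # scale the loss to avoid fp16 gradient underflow
scaler.step(opt)
scaler.update()
```

With `enabled=False` both `autocast` and `GradScaler` are no-ops, so the same training code runs unchanged on CPU-only machines.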
Emptying the CUDA cache:

import torch
torch.cuda.empty_cache()

If you need an older toolkit, e.g. CUDA 9.0: conda install pytorch torchvision cudatoolkit=9.0 -c pytorch

Requesting more memory than the card has reproduces the error directly (1024**3 float32 values are 4 GiB on their own):

>>> x = torch.randn(1024**3, device='cuda')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: CUDA out of memory.

GPU: RTX 2080Ti, CUDA 10.x. I haven't found anything definitive about PyTorch memory usage; reading other forums, it seems GPU memory management is a pretty big challenge with PyTorch. I think there's a GPU memory leak problem, because it raises CUDA out of memory across iterations.

You can also use dtypes that use less memory.

out = resnet18(data.to("cuda:0"))  # use the data as input and feed it to the model
print(out.shape)

Dec 08, 2020 · cuda_memory_resource: because cuda_memory_resource is just a thin wrapper around cudaMalloc and cudaFree, this is a good demonstration of the level of speedup that RMM can provide. The benchmarks in this section were run on an Ubuntu 18.04 PC with CUDA 11.0, an NVIDIA Quadro V100 GPU with 32 GiB of RAM, and an AMD Ryzen 7 3700X CPU.

Sep 03, 2021 · Thanks for the comment!
Fortunately, it seems like the issue is not happening after upgrading the PyTorch version. Keep in mind that while PyTorch aggressively frees up memory internally, a PyTorch process may not give memory back to the OS even after you del your tensors; the caching allocator holds on to it for reuse.

Feb 11, 2019 · pytorch "CUDA error: out of memory", description and solution: judge whether the model or the batch size is too large, and either optimize the model or reduce the batch size.

Jul 23, 2020 · RuntimeError: CUDA out of memory. Today, when I was running the program, it kept reporting this error, saying I was out of CUDA memory. I am running an evaluation script in PyTorch, and I searched for hours trying to find the best way to resolve this. Is there any way to clear the created graph? Any help to solve the memory issue is appreciated.

Hi everyone: I'm following this tutorial and training an RGCN on a GPU: 5.3 Link Prediction — DGL documentation. When I try to increase batch_size, I get: CUDA out of memory. Solution: reduce the batchSize, even down to 1.
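Instead of hand-tuning the batch size by trial and error, the retry logic can be automated: catch the out-of-memory RuntimeError, empty the cache, and retry with a smaller batch. The helper below is hypothetical (`run_with_fallback` and `fake_step` are my names, not from the original posts); only the `torch.cuda.empty_cache()` call is real PyTorch API.

```python
import torch

def run_with_fallback(run_step, batch_sizes=(64, 32, 16, 8)):
    """Call run_step(batch_size), retrying with smaller sizes on CUDA OOM."""
    for bs in batch_sizes:
        try:
            return run_step(bs)
        except RuntimeError as err:
            if "out of memory" not in str(err):
                raise  # unrelated failure: do not mask it
            if torch.cuda.is_available():
                torch.cuda.empty_cache()  # free cached blocks before retrying
    raise RuntimeError("out of memory even at the smallest batch size")

# CPU-friendly demo: pretend batches above 16 do not fit on the device.
def fake_step(bs):
    if bs > 16:
        raise RuntimeError("CUDA out of memory. Tried to allocate ...")
    return bs

print(run_with_fallback(fake_step))  # 16
```

One caveat: after a real OOM, some tensors created earlier in the failed step may still be referenced by the exception traceback, so in a notebook you may additionally need to clear the exception before memory is actually released.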
Sep 23, 2018 · To get the current usage of memory you can use PyTorch's functions such as:

import torch
# Returns the current GPU memory occupied by tensors, in bytes, for a given device
torch.cuda.memory_allocated()
# Returns the current GPU memory managed by the caching allocator, in bytes, for a given device
torch.cuda.memory_cached()  # renamed to torch.cuda.memory_reserved() in newer releases

Mar 15, 2021 · EDIT: SOLVED - it was a number-of-workers problem; I solved it by lowering them. I am using a 24 GB Titan RTX for an image segmentation UNet with PyTorch. It was always throwing CUDA out of memory at different batch sizes; I had more free memory than it stated it needed, and lowering the batch size INCREASED the memory it tried to allocate, which didn't make any sense.

Sep 01, 2021 · Freeing PyTorch memory is much more straightforward:

del model
gc.collect()
torch.cuda.empty_cache()

Force Windows to use all the available RAM: Step 1: go to the Start button and type "Run". Step 2: in the Run box, type "msconfig". (Note that this affects host RAM, not GPU memory.)

Oct 09, 2019 · 🐛 Bug: sometimes PyTorch does not free memory after a CUDA out of memory exception.
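Putting the three cleanup steps above together in runnable form (the toy model is illustrative; the sequence del / gc.collect() / empty_cache() is the one from the Sep 01, 2021 note):

```python
import gc
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Linear(1000, 1000).to(device)

# 1. Drop the Python reference so the parameters become garbage.
del model
# 2. Collect cycles so every tensor's refcount actually reaches zero.
gc.collect()
# 3. Hand the allocator's now-unused cached blocks back to the driver.
if torch.cuda.is_available():
    torch.cuda.empty_cache()
```

Order matters: `empty_cache()` can only release blocks whose tensors are already gone, which is why the `del` and the garbage collection come first.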
