Comments on It's a UNIX system!: The all-seeing eye of DTrace

Anonymous (2012-01-04 07:23):

I've run into this scenario a couple of times in the past. It was not uncommon in times past to request the highest fileno and then close them all, iterating from one past the highest you wanted all the way up to the largest. That used to be the only way, in fact. Then at some later date the application runs out of file descriptors, so the limit is increased to 64K, and now it takes a very long time just to start.

Even better than iterating over /proc yourself is to use the closefrom(3C) function call. It does the iteration for you. Why re-invent the wheel?

Anonymous (2012-01-04 10:46):

"you can even see problems that does not even exist."

Yes, it's always good to see problems that don't even exist. :)

"Perhaps a iteration with close on the contents of /proc/${PID}/fd would have been less resource consuming in this scenario."

That's not portable. If you want a non-portable approach, just use closefrom(). Otherwise, for portability, which is the reason tools do this, you have to brute-force all fds.