In Part 1 of this tutorial we saw how to read the title tag of a single HTML file. Now we will develop a script that reads the title tags of all the files present inside a directory. The basic script stays the same; we simply place it inside a while loop that lists every file in the directory.
You should read Part 1 first to see how the code opens a file in read mode and collects the text between the title tags, and how the directory handle is used to list all the files.
Here is the code to handle the directory listing.
$path = "../dir-name/"; // write the path of your directory here
$handle = opendir($path);
while (($file_name = readdir($handle)) !== false) {
We can limit the loop to files of a particular type by checking the file extension. Here we use an if condition to include or exclude different types of files (read more on stristr()).
if (stristr($file_name, ".php")) {
    // read the file now
}
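Note that stristr() matches ".php" anywhere in the file name, so a name like backup.php.bak would also pass this check. If you want a stricter match on the extension, here is a small variation (a sketch, not part of the original tutorial) that compares the extension returned by pathinfo():
if (strtolower(pathinfo($file_name, PATHINFO_EXTENSION)) == "php") {
    // read the file now
}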
The rest of the code is the same as in Part 1, so here is the complete script.
<?php
/////////////// function my_strip ///////////
// Returns the text found between $start and $end inside $total
function my_strip($start, $end, $total){
    $total = stristr($total, $start);   // cut off everything before the start tag
    $f2 = stristr($total, $end);        // part of the string starting from the end tag
    return substr($total, strlen($start), -strlen($f2));
}
///////////////// End of function my_strip ///

///////////// Reading of file content ////
$i = 0;
$path = "../dir-name/"; // write the path of your directory here
$handle = opendir($path);
while (($file_name = readdir($handle)) !== false) {
    if (stristr($file_name, ".php")) {
        $url = $path . $file_name;
        $contents = "";
        $fd = fopen($url, "r"); // opening the file in read mode
        while ($buffer = fread($fd, 1024)) {
            $contents .= $buffer;
        }
        fclose($fd); // close the file handle
        /////// End of reading file content ////////

        //////// Collect the title part ///////
        $t = my_strip("<title>", "</title>", $contents);
        echo $t;
        echo "<br>";
        $i = $i + 1;
    }
}
closedir($handle); // close the directory handle
echo $i; // number of files processed
?>
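On reasonably recent PHP versions the same job can be done more compactly. Below is a minimal alternative sketch (not part of the original article) that lists the .php files with glob() and pulls the title out with a regular expression; the directory path ../dir-name/ is only an example and should be changed to your own.
<?php
// Alternative sketch: list the .php files with glob() and extract titles with a regex
$count = 0;
foreach (glob("../dir-name/*.php") as $file) {
    $contents = file_get_contents($file); // read the whole file at once
    if (preg_match("/<title>(.*?)<\/title>/is", $contents, $m)) {
        echo $m[1]; // the text between the title tags
        echo "<br>";
        $count = $count + 1;
    }
}
echo $count; // number of titles found
?>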