Saturday, November 8, 2008

register_shutdown_function possible use cases

Eirik Hoem, on his blog, provides an overview of PHP's register_shutdown_function, and suggests using it for the cases when your Web page has run out of memory or hit a fatal error, and you don't want to display a blank page to the users.

register_shutdown_function is also useful for command-line PHP scripts. Pretty frequently your script has to do some task like parsing a large XML file, and the test examples it was originally written against did not account for the XML file possibly being huge. So your script dies at, say, 23% completion, and you're left with 23% of the XML file parsed. Not ideal, but a quick duct-tape-style fix is to introduce a register_shutdown_function call to system(), passing it the script itself.

If you happen to keep track of which line you're on while parsing, you can pass that line number as the first parameter to your own script and make it start off after that 23% mark, or wherever it died. The script then needs to be launched with 0 passed as the first parameter. It will run out of memory, die, and trigger the shutdown function, which launches another copy of the script (while successfully shutting down the original process) with a new line number, and the process repeats.

Again, this is a duct tape approach to PHP memory consumption issues while working with large data sets.
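The relaunch trick can be sketched in a few lines. This is a hypothetical stand-in, not the original script: the file name, parsing loop, and progress counter are all made up for illustration. The shutdown function re-executes the script with the last processed line number unless the run finished cleanly.

```php
<?php
// Hypothetical sketch of the duct-tape restart described above. Assumes
// the script is launched as `php parse.php 0`; the parsing loop below is
// a made-up stand-in for real XML processing.

$currentLine = isset($argv[1]) ? (int) $argv[1] : 0;
$done = false;

register_shutdown_function(function () use (&$currentLine, &$done) {
    if (!$done) {
        // We died mid-parse (e.g. out of memory): relaunch ourselves,
        // resuming from the last line we managed to process.
        system('php ' . escapeshellarg(__FILE__) . ' ' . $currentLine);
    }
});

$lines = range(1, 100); // stand-in for the lines of a huge XML file
for ($i = $currentLine; $i < count($lines); $i++) {
    // ... parse $lines[$i] here ...
    $currentLine = $i + 1; // remember progress after each line
}

$done = true; // clean finish: the shutdown function will not relaunch
echo "finished at line $currentLine";
```

Note the $done flag: register_shutdown_function runs on every exit, including a normal one, so the relaunch must be guarded or the script would restart itself forever.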


24 Web site performance tips

The Yahoo! Developer Network blog had an entry by Stoyan Stefanov with his presentation from the PHP Quebec conference. A few points to take away, in case you don't feel like going through the 76-slide presentation:

1. A 100 ms drop in page rendering time led to a 10% difference in sales on Amazon; an extra 500 ms of load time meant 20% less traffic for Google.
2. Make fewer HTTP requests - combine CSS and JS files into single downloads. Minify both JS and CSS.
3. Combine images into CSS sprites.
4. Bring static content closer to the users. That usually means CDNs like Akamai or Limelight, but sometimes a co-location facility or data center in a foreign country is the only option.
5. Static content should have Expires: headers way into the future, so that they’re never re-requested.
6. Dynamic content should have Cache Control: header.
7. Offer content gzip’ed.
8. Stoyan claims nothing will be rendered in the browser till the last piece of CSS has been served, and therefore it’s critical to send CSS as early in the process as possible. I happen to have a document with CSS declared at the very end, and disagree with this statement - at least the content seems to render OK without CSS, and then self-corrects when CSS finally loads.
9. Move the scripts all the way to the bottom to avoid blocking downloads - Stoyan's example places the JavaScript includes right before the closing </body> and </html> tags, although it's possible to place them even further down (well, you'd break XHTML purity, I suppose, if you declare your documents to be XHTML).
10. Avoid CSS expressions.
11. Consider placing the minified CSS and JS files on separate servers to fight browser’s default pipelining settings - not everybody has FasterFox or tweaked pipeline settings.
12. For super-popular pages consider inlining JS for fewer HTTP requests.
13. Even though placing content on external servers with different domains will help you with HTTP pipelining, don’t go crazy with various domains - they all require DNS lookups.
14. Every 301 redirect is a wasted HTTP request.
15. For busy backend servers consider PHP’s flush().
16. Use GET over POST any time you have a choice.
17. Analyze your cookies - large number of them could substantially increase the number of TCP packets.
18. For faster JavaScript and DOM parsing, reduce the number of DOM elements.
19. document.getElementsByTagName('*').length will give you the total number of elements. Look out for abusive <div>s.
20. Any missing JS file is a significant performance penalty - the browser will parse the 404 page you generate, trying to see if it contains valid JavaScript.
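Tips 5 and 6 can be applied from PHP itself when your code builds the response headers. A minimal sketch, assuming a helper named cacheHeaders() and a one-year lifetime (both made up for illustration):

```php
<?php
// Hypothetical sketch of far-future Expires and Cache-Control headers.
// The helper name and the one-year lifetime are illustrative assumptions.

function cacheHeaders($ttlSeconds)
{
    return array(
        'Expires: ' . gmdate('D, d M Y H:i:s', time() + $ttlSeconds) . ' GMT',
        'Cache-Control: public, max-age=' . $ttlSeconds,
    );
}

$headers = cacheHeaders(86400 * 365); // roughly a year "way into the future"
foreach ($headers as $h) {
    echo $h, "\n"; // in a real script you would call header($h) instead
}
```

In practice the far-future Expires header goes on static assets only, which then need versioned file names so a changed file is ever re-fetched.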


FirePHP for PHP and AJAX development

FirePHP is a package consisting of a Firefox extension and a server-side PHP library for quick PHP development on top of Firebug. It allows you to include the PHP library and issue logging calls like


fb('Log message'  ,FirePHP::LOG);
fb('Info message' ,FirePHP::INFO);
fb('Warn message' ,FirePHP::WARN);
fb('Error message',FirePHP::ERROR);

This output is visible only in Firefox browsers that have FirePHP installed on top of Firebug. You can also dump entire arrays and objects to the fb() function call and have them displayed in the Firebug UI.


PHP Top scalability mistakes

John Coggeshall, CTO of Automotive Computer Services and author of the Zend PHP Certification Practice Book and PHP 5 Unleashed, gave a talk at OSCON 2008 on the top 10 scalability mistakes. I wasn't there, but he posted the slides for everybody to follow. Here are some lessons learned.

1. Define the scalability goals for your application. If you don’t know how many requests you’re shooting for, you don’t know whether you’ve built something that works, and how long it’s going to last you.
2. Measure everything. CPU usage, memory usage, disk I/O, network I/O, requests per second, with the last one being the most important. If you don’t know the baseline, you don’t know whether you’ve improved.
3. Design your database with scalability in mind. Assume you’ll have to implement replication.
4. Do not rely on NFS for code sharing on a server farm. It’s slow and it’s got locking issues. While the idea of keeping one copy of code, and letting the rest of the servers load them via NFS might seem very convenient, it doesn’t work in practice. Stick to some tried practices like rsync. Keep the code local to the machine serving it, even if it means a longer push process.
5. Play around with I/O buffers. If you’ve got tons of memory, play with TCP buffer size - your defaults are likely to be set conservatively. See your tax dollars at work and use this Linux TCP Tuning guide. If your site is written in PHP, use output buffering functions.
6. Use Ram Disks for any data that’s disposable. But you do need a lot of available RAM lying around.
7. Optimize bandwidth consumption by enabling compression via mod_deflate, setting zlib.output_compression to On for PHP sites, or using Tidy content reduction for PHP+Tidy sites.
8. Configure PHP for speed. Turn off the following: register_globals, auto_globals_jit, magic_quotes_gpc, expose_php, register_argc_argv, always_populate_raw_post_data, session.use_trans_sid, session.auto_start. Set session.gc_divisor to 10,000 and output_buffering to 4096, in John's example.
9. Do not use blocking I/O, such as reading another remote page via curl. Make all the calls non-blocking, otherwise the wait is something you can’t really optimize against. Rely on background scripts to pull down the data necessary for processing the request.
10. Don’t underestimate caching. If a page is cached for 5 minutes, and you get even 10 requests per second for a given page, that’s 3,000 requests your database doesn’t have to process.
11. Consider PHP op-code cache. This will be available to you off-the-shelf with PHP6.
12. For content sites consider taking static stuff out of dynamic context. Let’s say you run a content site, where the article content remains the same, while the rest of the page is personalized for each user, as it has My Articles section, and so on. Instead of getting everything dynamically from the DB, consider generating yet another PHP file on the first request, where the article text would be stored in raw HTML, and dynamic data pulled for logged-in users. This way the generated PHP file will only pull out the data that’s actually dynamic.
13. Pay great attention to database design. Learn indexes and know how to use them properly. InnoDB outperforms MyISAM in almost all contexts, but doesn’t do full-text searching. (Use sphinx if your search needs get out of control.)
14. Design PHP applications in an abstract way, so that the app never needs to know the IP address of the MySQL server. Something like ‘mysql-writer-db’, and ‘mysql-reader-db’ will be perfectly ok for a PHP app.
15. Run external scripts monitoring the system health. Have the scripts change the HOSTS if things get out of control.
16. Do not do database connectivity decision-making in PHP. Don’t spend time doing fallbacks if your primary DB is down. Consider running MySQL Proxy for simplifying DB connectivity issues.
17. For super-fast reads consider SQLite. But don’t forget that it’s horrible with writes.
18. Use Keepalive properly. Use it when both static and dynamic files are served off the same server, and you can control the timeouts, so that a bunch of Keep-alive requests don’t overwhelm your system. John’s rule? No Keep-alive request should last more than 10 seconds.
19. Monitor via familiar Linux commands. Such as iostat and vmstat. The iostat command is used for monitoring system input/output device loading by observing the time the devices are active in relation to their average transfer rates. The iostat command generates reports that can be used to change system configuration to better balance the input/output load between physical disks. vmstat reports information about processes, memory, paging, block IO, traps, and cpu activity.
20. Make sure you’re logging relevant information right away. Otherwise debugging issues is going to get tricky.
21. Prioritize your optimizations. A 50% speed-up of code that runs on 2% of your pages yields a 1% total improvement; a 10% speed-up of code that runs on 80% of your pages yields an 8% overall improvement.
22. Use profilers. They draw pretty graphs, they’re generally easy to use.
23. Keep track of your system performance. Keep a spreadsheet of some common stats you’re tracking, so that you can authoritatively say how much of performance gain you got by getting a faster CPU, installing extra RAM, or upgrading your Linux kernel.
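Tip 10's arithmetic is worth spelling out: with a 5-minute lifetime at 10 requests/second, a single cached page absorbs 10 * 300 = 3000 requests between regenerations. A minimal file-based cache illustrating the idea, with all names assumed for illustration:

```php
<?php
// A minimal file-based page cache sketching tip 10; every name here is
// an assumption. With a 300-second lifetime at 10 requests/second, one
// cached page absorbs 10 * 300 = 3000 requests between regenerations.

function cachedFetch($key, $ttl, $generate)
{
    $file = sys_get_temp_dir() . '/cache_' . md5($key);
    if (is_file($file) && time() - filemtime($file) < $ttl) {
        return file_get_contents($file); // cache hit: the DB is not touched
    }
    $content = $generate();              // cache miss: do the expensive work
    file_put_contents($file, $content);
    return $content;
}

$misses = 0;
$page = function () use (&$misses) {
    $misses++;
    return '<html>article text</html>'; // stand-in for a DB-built page
};

$key = 'article-' . uniqid();      // unique key so the demo starts cold
cachedFetch($key, 300, $page);     // miss: generates and stores the page
cachedFetch($key, 300, $page);     // hit: served straight from the file
echo $misses; // the generator ran only once
```

A real implementation would also need locking and explicit invalidation on writes; this sketch only shows the read path.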


Sphinx search ported to PECL

Anthony Dovgal reported on adding sphinx, an open-source SQL full-text search engine, to PECL. The documentation is available on the PHP site, and the engine is available upon including sphinxapi.php in your application.

You know the usual InnoDB vs. MyISAM trade-off, where the former is faster but the latter has full-text search? Sphinx is a free open-source full-text search engine that works with many RDBMSs, and is now pretty easy to incorporate into PHP. Here is a simple example of calling Sphinx:

$s = new SphinxClient;
$s->setServer("localhost", 6712);
$s->setMatchMode(SPH_MATCH_ANY);
$s->setMaxQueryTime(3);
$result = $s->query("test");


Best practices in PHP development

1. Use source control
1. First, choose between distributed and non-distributed
2. Then, if you chose non-distributed, choose between CVS and SVN
3. In Subversion, use trunk/ for ongoing development and bug fixes, branches/ for ongoing large projects that later need to be merged in, and tags/ for releases
4. Use svn externals to connect to remote repositories
5. Subversion supports pre-commit and post-commit hooks for better code maintainability and checks



2. Implement coding standards
1. Develop class, variable, function, package, etc. naming conventions
2. Agree on common formatting as far as spacing, braces, etc.
3. Implement comment standards
4. PHP_CodeSniffer can run on pre-commit to check whether the commit adheres to the standards
5. Don’t forget to enforce coding standards on any outsourced projects
3. Unit testing and code coverage
1. Use PHPUnit for unit testing
2. For continuous integration, check out phpUnderControl
3. For integration testing, check out Selenium, a general Web application testing suite
4. Documentation
1. Don’t invent your own standards, see what phpDocumentor has to offer. Doxygen also supports phpDoc tags
2. For documenting the software project, try DocBook - XML-based format that allows you to quickly publish a PDF document, or a Website with documentation
5. Deployment
1. Have a standard deployment process that a rookie can become familiar with quickly
2. Support 3 environments - development, staging, and production
3. Deploy code only from repository tags, don’t run trunk, or allow code editing on server
4. Check out a new tag from SVN, point the symlink to it. If something goes wrong during release, change the symlink back to the previous version - easy rollback strategy
5. Everything that needs to be done on the production servers needs to be automated
6. You can do another Selenium test after the release is deployed
7. Check out Monit and Supervisord for deployment monitoring


12 PHP optimization tips

1. If a method can be static, declare it static. Speed improvement is by a factor of 4.
2. Avoid magic like __get, __set, __autoload
3. require_once() is expensive
4. Use full paths in includes and requires, less time spent on resolving the OS paths.
5. If you need to find out the time when the script started executing, $_SERVER['REQUEST_TIME'] is preferred to time()

6. See if you can use strncasecmp, strpbrk and stripos instead of regex
7. str_replace is faster than preg_replace, but strtr is faster than str_replace by a factor of 4
8. If a function, such as a string replacement function, accepts both arrays and single characters as arguments, and your argument list is not too long, consider writing a few redundant replacement statements passing one character at a time, instead of one line of code that accepts arrays as the search and replace arguments.
9. Error suppression with @ is very slow.
10. $row['id'] is 7 times faster than $row[id]
11. Error messages are expensive
12. Do not call functions inside a for loop condition, as in for ($x = 0; $x < count($array); $x++) - the count() function gets called on every iteration.
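Tip 12 can be demonstrated directly. Note that the snippet in the tip also dropped the $x++ increment, which is restored in this sketch:

```php
<?php
// Hoisting count() out of the loop condition avoids re-evaluating it
// on every iteration.

$array = range(1, 5);

// Slow form: count($array) is called once per iteration.
for ($x = 0; $x < count($array); $x++) {
    // ...
}

// Faster form: compute the length once, before the loop.
$n = count($array);
$sum = 0;
for ($x = 0; $x < $n; $x++) {
    $sum += $array[$x];
}
echo $sum; // 1 + 2 + 3 + 4 + 5 = 15
```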


Online Forms? Get Vizzual!

VizzualForms is a Web-based data management system created for collecting, storing and processing data via the Internet. It allows you to create and publish Web forms in mere minutes without any programming knowledge or advanced computer skills.
It is a time- and cost-effective solution for private or business use. It is very easy to use, yet it is powerful. It includes a visual form creator, data processing tool, contact manager, invitation module and report tool.



Company Name
Vizzual


20-Word Description
The idea of VizzualForms is to allow users to create even the most complicated forms in a simple manner. It is about collecting, storing and processing data.

CEO’s Pitch

With VizzualForms users can create a broad spectrum of forms, invite others to fill in the forms and see or share the results in real time. Our integrated email notification system can send email with data to a mailbox each time a form is submitted, guaranteeing that no request or data will be missed. Forms created with VizzualForms can be embedded into Web page or delivered using an invitation module.

Mashable’s Take
There are a number of applications and services, online and off, that will accomplish the task of producing forms for numerous purposes. Tools featured within the near-ubiquitous Microsoft Office can manage things like surveys and registration forms and whatnot. The same with Apple’s iWork suite and other similar releases. Meanwhile, Web-based options like FormSite, ExtremeForm, JotForm, and Wufoo can accomplish needed designs with varying degrees of customization and interactivity.
Vizzual Forms is one such character. Users of Vizzual Forms are served with what basically amount to two utilities. One intended for form assembly, the other for data analysis. The first is fairly intuitive to operate. It generally works the way the average user might expect it to. Its strongest features are for editing. Sending things to a user’s contact list might be a little tedious at first, but so long as you maintain an account with regularity, you’ll learn the proverbial ropes pretty quickly, we think. Besides, if you’re conducting an online poll or making some other request, embeds can be easily made.

Depending on your level of engagement, you can make do with a cost-free and ad-supported account, or spring for a Basic ($8.50), Small Biz ($16.50), or Enterprise-level registration ($31.50). Companies looking to host Vizzual on their own servers can even pony up $1,900 for the privilege.

To be honest, what Vizzual Forms will help you produce won’t be aesthetically brilliant by any means. The examples it reveals are pretty ordinary. Apart from a palette of colors, you might say things are a bit boring. But rarely is the act of filling a form with information a truly fun endeavor. So the important thing, then, is for the service to help Joe Creator ease through the process as best he or she can. And Vizzual Forms appears conditioned to do just that. You can certainly be specific with an editing job if you like. Nothing really stops you from spending a great deal of time fine-tuning a poll or whathaveyou. But you’ll rarely, if ever, be lost for a visual guide on how to get it done.



Monday, November 3, 2008

Understanding a Robots.txt File

The robots.txt file allows you to control the behaviour of the web crawlers and spiders that visit your site. Most web crawlers are harmless and simply collect data for various purposes like search engine listings, internet archiving, link validation, security scanning, etc. It's always a good idea to create a robots.txt file to tell the crawlers where they can and cannot go.
A crawler should always follow "The Robots Exclusion Protocol", and therefore whenever it comes to a web site to crawl it, it first checks the robots.txt file.


www.yourdomain.com/robots.txt


Once it has processed the robots.txt file, it will proceed to the rest of your site, usually starting at the index file and traversing throughout. There are quite often places on a web site which do not need to be crawled, like the images directory, data directories, etc., so these are what you need to place in your robots.txt file.

The "/robots.txt" file is simply a text file, which contains one or more records. A single record looking like this:

User-agent: *
Disallow: /


The "User-agent: *" means this section applies to all robots. The "Disallow: /" tells the robot that it should not visit any pages on the web site.

A basic robots.txt example

User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /junk/


Allowing a single crawler

User-agent: Google
Disallow:

User-agent: *
Disallow: /


To exclude a single robot

User-agent: BadBot
Disallow: /


People-search sites Reunion.com, Wink to merge

Social network Reunion.com has made a new friend: people search service Wink. The two have merged in a new deal that promises to make it dramatically easier to find people on the Web. Early next year, the merger will produce "an entirely new brand," the companies said. The two have not said what its name will be, nor have financial details been disclosed. With the dual technologies of Reunion and Wink, the companies say that they will be able to search more than 700 million social-networking profiles. They'll be able to search profiles on MySpace, Facebook, LinkedIn, Friendster, AOL's Bebo, Microsoft's Windows Live Spaces, Yahoo, Xanga, and Twitter, among others.

Numbers from Nielsen last month indicated that Reunion.com, which says it receives 12 million unique visitors each month, is one of the fastest-growing social networks in the U.S. despite the fact that it's hardly on the radar of Twittering blog pundits. Its biggest demographic, according to Nielsen, is those between 55 and 64 who are looking to re-connect with friends and classmates.

"Through this merger, we're redefining the people search space by bridging existing social networks and providing consumers with the tools they need to find, be found, and stay connected," Wink CEO Michael Tanne said in a release. "We're aiming to create an entirely new online experience that simplifies people's lives by making it easy to find and keep up with everyone they know. There will be exciting developments in the coming months as we integrate our strengths and push our business forward."

News Source


RockYou looks to Asia with new $17 million investment

Investments to the tune of $17 million are a rarity these days, but app factory RockYou has landed just that: the San Francisco-based company has announced that Japanese mobile giant SoftBank and Korean telecom investment company SK Telecom Ventures have invested $17 million to create a new joint venture to build apps for the Asia-Pacific market.
RockYou's Series C venture round, which pulled in $35 million, was in June--with the fresh $17 million, the company has raised $67 million so far.

This marks the entry of RockYou, which is best known for its Facebook and MySpace widgets, into the mobile space. "In Asia, over half the social networking occurs on mobile," CEO Lance Tokuda told CNET News. "It's both Web and mobile, and we think we'll get good penetration. The results on (Chinese social network) Xiaonei so far have been very good." RockYou says it is the first non-Chinese company to build apps on Xiaonei.

There will be a separate team handling RockYou's new Asia-Pacific operations, with operations coming from the new joint-venture investors as well. "In a lot of cases it's more cultural, where they'll take our assets and they'll port them and localize them," Tokuda said.

But there will be synergy as well, with mobile apps likely coming to the U.S. market after they're released in Asia. SoftBank is the Japanese carrier for Apple's iPhone, and iPhone apps created for it may eventually be converted to U.S. versions.

"We have no U.S. iPhone apps, and yes, we will port them back (from Asia)," Tokuda said.

So is the company giving up on Facebook's platform? No, Tokuda said, adding that they plan to keep building for it. Nor is the round specifically designed as recession padding, he added.

"There's still opportunity out there," he explained. "That said, it's good to raise a lot of money and have money in the bank, and this latest strategic round helps."

News Source


Thursday, October 30, 2008

Searching for All titles inside a directory of a page using PHP

We have seen in Part 1 of this tutorial how to read the title tags of an HTML file. Now we will develop a script that reads the title tags of all the files present inside a directory. The basic script remains the same; we will simply wrap it in a while loop that lists all the files present inside the directory.

In Part 1 we developed the code to open a file in read mode and collect the text between the title tags. Also read how the directory handler works to list all the files.

Here is the code to handle the directory listing.

$path="../dir-name/";// Right your path of the directory
$handle=opendir($path);
while (($file_name = readdir($handle))!==false) {


We can restrict this to files of a particular type by checking the extension. Here we will use one if condition to include or exclude different types of files (read more on stristr()).

if (stristr($file_name, ".php")) { /* read the file now */ }


The rest of the code is the same as in Part 1, so here is the complete code.

<?php
// function my_strip: returns the text between $start and $end inside $total
function my_strip($start, $end, $total) {
    $total = stristr($total, $start);
    $f2 = stristr($total, $end);
    return substr($total, strlen($start), -strlen($f2));
}

// read every matching file in the directory and print its title
$i = 0;
$path = "../dir-name/"; // write your directory path here

$handle = opendir($path);
while (($file_name = readdir($handle)) !== false) {

    if (stristr($file_name, ".php")) {
        $url = $path . $file_name;

        $contents = "";
        $fd = fopen($url, "r"); // opening the file in read mode
        while ($buffer = fread($fd, 1024)) {
            $contents .= $buffer;
        }
        fclose($fd); // close the file pointer when done

        // collect the title part
        $t = my_strip("<title>", "</title>", $contents);
        echo $t;
        echo "<br>";
        $i = $i + 1;
    }
}
closedir($handle);
echo $i;
?>

Article Source


Wednesday, October 29, 2008

Searching for title inside a file using PHP

We can collect the text written between two landmarks inside a file. These landmarks can be the opening and closing of HTML tags, so whatever is written within a pair of tags can be copied or collected for further processing. Before we go to an example, read the tutorial on how to get part of a string by using one starting and one ending string.

Let us try to understand this by an example. We will try to develop a script which will search and collect the text written within the title tags of a page. Read here if you want to know more about title tags in an html page. Here is an example of title tag.
<title>This is the title text of a page</title>

As you can see, within the page we can use the starting and ending title tags, or any pair of tags, as two landmarks and collect the characters or string between them.

Now let us learn how to open a file and read the content. Here is the code part to do that.

$url="../dir-name/index.php";
$contents="";
$fd = fopen ($url, "r"); // opening the file in read mode
while($buffer = fread ($fd,1024)){
$contents .=$buffer;
}


Now that we have the content of the file stored in the variable $contents, we will use our function my_strip (read details about the my_strip function here) to collect only the title part from the variable and print it to the screen.

$t=my_strip("<title>","</title>",$contents);
echo $t;


With this we can give any URL and see the title of the file. In the same way as the title tag, we can read any other tag of a page, such as the meta keywords, meta description, or body tag. Many applications can be developed using this, but let us try to develop a few more things from it.

First, reading all the files of a directory and displaying all the titles of the files inside that directory.
Second, building a hyperlink from these titles to query Google for them (think why?).

We will discuss these two scripts in the next section. Before that, here is the full code as discussed in the tutorial above.

<?php
// function my_strip: returns the text between $start and $end inside $total
function my_strip($start, $end, $total) {
    $total = stristr($total, $start);
    $f2 = stristr($total, $end);
    return substr($total, strlen($start), -strlen($f2));
}

// reading the file content
$url = "../dir-name/index.php"; // write your file path here
$contents = "";
$fd = fopen($url, "r"); // opening the file in read mode
while ($buffer = fread($fd, 1024)) {
    $contents .= $buffer;
}
fclose($fd); // close the file pointer

// collect the title part
$t = my_strip("<title>", "</title>", $contents);
echo $t;
?>



Once we know the title text, we can use the str_ireplace() function to replace the old title with a new one and then write the content back to the file.
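That follow-up idea can be sketched as follows, using a temporary file as a stand-in for a real page; the file name and both titles are made up for the example:

```php
<?php
// Sketch: swap the old title for a new one with str_ireplace() and
// write the result back. The temp file stands in for a real page.

$file = sys_get_temp_dir() . '/title_demo_' . uniqid() . '.html';
file_put_contents($file, '<html><title>Old Title</title></html>');

$contents = file_get_contents($file);
$contents = str_ireplace('<title>Old Title</title>',
                         '<title>New Title</title>', $contents);
file_put_contents($file, $contents); // write the content to the file again

echo file_get_contents($file);
```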

Article Source


Getting the last updated time of the file in PHP

We can get the last updated date of any file by using the filemtime() function in PHP. This function returns the date as a Unix timestamp, and we can format it as required by using the date() function.




The filemtime() function uses the server's file system, so it works for local files only; we can't use it to get the modified time of a file on a remote system.


Here is the code to get the last modified date of any file. We are checking an existing file (test.php).

echo date("m/d/Y", filemtime("test.php"));


The above code will display the modified date in month/day/year format.
Note that we have used the date() function to convert the Unix timestamp returned by filemtime().

Article Source


How to get the file name of the current loaded script using PHP ?

We can get the current file name, i.e. the file executing the code, by using SCRIPT_NAME. This gives us the path from the server root, so the name of the current directory will also be included. Here is the code.


$file = $_SERVER["SCRIPT_NAME"];
echo $file;


The above lines will print the present file name along with the directory name. For example, if our current file is test.php and it is running inside the my_file directory, then the output of the above code will be:

/my_file/test.php

We will add some more code to the above to get only the file name. We will use the explode() function to break the string using the delimiter "/".

As the output of explode() is an array, we will collect the last element of this array to get our file name. The index of the last element is the total number of elements in the array minus one, because element indexes start from 0 (not from one). So the index of the last element = total number of elements - 1.

Here is the code to get the last element of the array returned by explode().

$break = explode('/', $file);
$pfile = $break[count($break) - 1];


Here $pfile is the variable which will have the value of present file name.

We can use $pfile in different applications where the current file name is required.

Here is the complete code.

$file = $_SERVER["SCRIPT_NAME"];
$break = explode('/', $file);
$pfile = $break[count($break) - 1];

echo $pfile;
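As an aside, PHP's built-in basename() function returns the trailing component of a path directly, so the same result can be had in one call; the sample path below is hypothetical:

```php
<?php
// The same result in one call, for comparison: basename() returns the
// trailing component of a path.

$file = '/my_file/test.php';   // what $_SERVER["SCRIPT_NAME"] might hold
$pfile = basename($file);
echo $pfile; // test.php
```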

Article Source


How to delete all the files in a directory using PHP ?

We have seen how a file can be deleted by using the unlink() function in PHP. The same function can be used along with the directory handler to list and delete all the files present inside a directory. We have already discussed how to display all the files present inside a directory. Now let us develop a function that takes a directory name as a parameter and uses the unlink() command to remove the files, looping through all the files of the directory.



Here is the code to this.

function EmptyDir($dir) {
    $handle = opendir($dir);

    while (($file = readdir($handle)) !== false) {
        if ($file == '.' || $file == '..') continue; // skip the special entries
        echo "$file<br>";
        @unlink($dir . '/' . $file);
    }

    closedir($handle);
}

EmptyDir('images');


Here, images is the name of the directory we want to empty.

Article Source


How to delete a file using PHP ?

We can delete a file by giving its URL or path in PHP using the unlink() command. This command will work only if write permission is given on the folder or file; without it the delete command will fail. Here is the command to delete a file.

unlink($path);


Here $path is the path of the file relative to the executing script. Here is an example of deleting a file by using a relative path.

$path="images/all11.css";
if(unlink($path)) echo "Deleted file ";


We have used an if condition to check whether the file delete command succeeded. The command below, however, will not work.

$path="http://domainname/file/red.jpg";
if(unlink($path)) echo "Deleted file ";


The warning message will say: unlink() [function.unlink]: HTTP does not allow unlinking. unlink() operates on the local filesystem, so a file must be addressed by its path, not by its URL.

Article Source

Read More......

How to write to a file using PHP

We can write to a file by using PHP's fwrite() function. Please note that we have to open the file in write mode, and we can do so only if the script has write permission. If the file does not exist then a new file will be created. We can change the permission of the file as well. You can read the content of a file by using the fopen() function in PHP. This is the way to write entries for a guestbook, a counter and many other scripts if you are not using any database for storing data. Here we will see how to write to a file.


<?php
$body_content = "This is my content"; // text to write into the file
$file_name = "test_file.txt"; // file name
$fp = fopen($file_name, "w");
// Open the file in write mode; if the file does not exist it will be created.
fwrite($fp, $body_content); // write the data to the file
fclose($fp); // close the file pointer
chmod($file_name, 0777); // change the file permission
?>

Read More......

PHP File open to read internal file

We can open a file or a URL for reading by using the fopen() function of PHP. While opening, we can give the mode of the open (read, write, etc.). By using fopen() we can read any external URL as well. We can write to a file by using the fwrite() function. Let us start with reading one internal file (of the same site). We have a file named delete.htm. We will use fopen() to open this file in read mode.

We will be using the fread() function to read the content through a file pointer. fread() reads up to length bytes from the file pointer referenced by $fd. Reading stops when length bytes have been read or EOF is reached, whichever comes first.

We have also used the filesize() function to get the size of the file, and passed the result to fread().
We will use all these functions to read the content of another file and print that content as output. Here is the code.

<?php
$filename = "delete.htm"; // located at the root, next to the script using it
$fd = fopen($filename, "r"); // open the file in read mode
$contents = fread($fd, filesize($filename)); // read the whole file
fclose($fd); // close the file pointer
echo $contents; // print the content of the file
?>

Article Source

Read More......

JavaScript and memory leaks

Credits: This tutorial is written by Volkan. He runs the site Sarmal.com, a bilingual site featuring all his work, products, services, and up-to-date profile information in English and Turkish.

If you are developing client-side re-usable scripting objects, sooner or later you will find yourself spotting memory leaks. Chances are that your browser will suck memory like a sponge and you will hardly be able to find a reason why your lovely DHTML navigation's responsiveness decreases severely after you visit a couple of pages within your site.

A Microsoft developer, Justin Rogers, has described IE leak patterns in his excellent article.

In this article, we will review those patterns from a slightly different perspective and support them with diagrams and memory utilization graphs. We will also introduce several subtler leak scenarios. Before we begin, I strongly recommend that you read that article if you have not done so already.

Why does the memory leak?

The problem of memory leakage is not limited to Internet Explorer. Almost any browser (including but not limited to Mozilla, Netscape and Opera) will leak memory if you provide adequate conditions (and that is not hard to do, as we will see shortly). But (in my humble opinion, YMMV, etc.) Internet Explorer is the king of leakers.

Don't get me wrong. I do not belong to the crowd yelling "Hey, IE has memory leaks, check out this new tool [link-to-tool] and see for yourself. Let us discuss how crappy Internet Explorer is and cover up all the flaws in other browsers."

Each browser has its own strengths and weaknesses. For instance, Mozilla consumes too much memory at initial boot and is not good at string and array operations; Opera may crash if you write a ridiculously complex DHTML script that confuses its rendering engine.

Although we will be focusing on the memory leaking situations in Internet Explorer, this discussion is equally applicable to other browsers.

A simple beginning


[Exhibit 1 - Memory leaking insert due to inline script]

<html>
<head>
<script type="text/javascript">
function LeakMemory(){
var parentDiv =
document.createElement("<div onclick='foo()'>");

parentDiv.bigString = new Array(1000).join(
new Array(2000).join("XXXXX"));
}
</script>
</head>
<body>
<input type="button"
value="Memory Leaking Insert" onclick="LeakMemory()" />
</body>
</html>


The first assignment, parentDiv = document.createElement(...), creates a div element and a temporary scope for it in which the scripting object resides. The second assignment, parentDiv.bigString = ..., attaches a large object to parentDiv. When the LeakMemory() method is called, a DOM element is created within the scope of the function and a very large object is attached to it as a member property; the DOM element will then be de-allocated and removed from memory as soon as the function exits, since it is an object created within the local scope of the function.

When you run the example and click the button a few times, your memory graph will probably look like this:



Increasing the frequency



No visible leak huh? What if we do this a few hundred times instead of twenty, or a few thousand times? Will it be the same? The following code calls the assignment over and over again to accomplish this goal:

[Exhibit 2 - Memory leaking insert (frequency increased) ]

<html>
<head>
<script type="text/javascript">
function LeakMemory(){
for(i = 0; i < 5000; i++){
var parentDiv =
document.createElement("<div onClick='foo()'>");
}
}
</script>
</head>
<body>
<input type="button"
value="Memory Leaking Insert" onclick="LeakMemory()" />
</body>
</html>
And here follows the corresponding graph:




The ramp in the memory usage indicates a leak. The horizontal line at the end of the ramp (the last 20 seconds) is the memory after refreshing the page and loading another (about:blank) page. This shows that the leak is an actual leak and not a pseudo-leak: the memory will not be reclaimed unless the browser window, and any dependent windows, are closed.

Assume you have a dozen pages with a similar leak graph. After a few hours you may want to restart your browser (or even your PC) because it just stops responding; the naughty browser is eating up all your resources. However, this is an extreme case, because Windows will increase the virtual memory size as soon as your memory consumption reaches a certain level.

This is not a pretty scenario. Your client/boss will not be very happy if they discover such a situation in the middle of a product showcase/training/demo.

A careful eye may have caught that there is no bigString in the second example. This means that the leak is caused merely by the internal scripting object (i.e. the anonymous script onclick='foo()'), which was not deallocated properly and so leaked memory at each iteration. To prove this thesis, let us run a slightly different test case:

[Exhibit 3 - Leak test without inline script attached]

<html>
<head>
<script type="text/javascript">
function LeakMemory(){
for(i = 0; i < 50000; i++){
var parentDiv =
document.createElement("div");
}
}
</script>
</head>
<body>
<input type="button"
value="Memory Leaking Insert" onclick="LeakMemory()" />
</body>
</html>


And here follows the corresponding memory graph:







As you can see, we have done fifty thousand iterations instead of five thousand, and still the memory usage is flat (i.e. no leak). The slight ramp is due to some other process on my PC.

Let us change our code in a more standard and somewhat unobtrusive manner (not exactly the right term here, but I can't find a better one), without embedded inline scripts, and re-test it.

Article Source

Read More......

Dynamically removing/ replacing an external JavaScript or CSS file

Any external JavaScript or CSS file, whether added manually or dynamically, can be removed from the page. The end result may not be fully what you had in mind, however. I'll talk about this a little later.

Dynamically removing an external JavaScript or CSS file

To remove an external JavaScript or CSS file from a page, the key is to hunt them down first by traversing the DOM, then call the DOM's removeChild() method to do the hit job. A generic approach is to identify an external file to remove based on its file name, though there are certainly other approaches, such as by CSS class name. With that in mind, the function below removes any external JavaScript or CSS file based on the file name entered:

function removejscssfile(filename, filetype){
var targetelement=(filetype=="js")? "script" : (filetype=="css")? "link" : "none" //determine element type to create nodelist from
var targetattr=(filetype=="js")? "src" : (filetype=="css")? "href" : "none" //determine corresponding attribute to test for
var allsuspects=document.getElementsByTagName(targetelement)
for (var i=allsuspects.length; i>=0; i--){ //search backwards within nodelist for matching elements to remove
if (allsuspects[i] && allsuspects[i].getAttribute(targetattr)!=null && allsuspects[i].getAttribute(targetattr).indexOf(filename)!=-1)
allsuspects[i].parentNode.removeChild(allsuspects[i]) //remove element by calling parentNode.removeChild()
}
}

removejscssfile("somescript.js", "js") //remove all occurrences of "somescript.js" on page
removejscssfile("somestyle.css", "css") //remove all occurrences of "somestyle.css" on page


The function starts out by creating a collection of either all "SCRIPT" or all "LINK" elements on the page, depending on the desired file type to remove. The corresponding attribute to look at changes accordingly ("src" or "href"). Then the function loops through the gathered elements backwards to see if any of them match the name of the file that should be removed. There's a reason for the reversed direction: whenever an identified element is deleted, the live collection collapses by one element, and to continue cycling through the new collection correctly, reversing the direction does the trick (the loop may encounter undefined entries, hence the check for allsuspects[i] in the if statement). To delete the identified element, the DOM method parentNode.removeChild() is called on it.
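The forward-versus-backward point is easy to reproduce with a plain array standing in for the live node list (a minimal sketch, independent of the DOM; the function names are just illustrative):

```javascript
// Remove every matching item from a list while iterating.
// Forward iteration skips the element that slides into a removed slot;
// backward iteration never re-visits shifted elements.
function removeForward(list, match) {
  for (var i = 0; i < list.length; i++) {
    if (list[i] === match) list.splice(i, 1); // indices after i shift left, and i++ skips one
  }
  return list;
}

function removeBackward(list, match) {
  for (var i = list.length - 1; i >= 0; i--) {
    if (list[i] === match) list.splice(i, 1); // the already-visited tail is unaffected
  }
  return list;
}

console.log(removeForward(["a", "a", "b"], "a"));  // the second "a" survives
console.log(removeBackward(["a", "a", "b"], "a")); // every "a" is removed
```

The same shifting happens in a live getElementsByTagName() collection when removeChild() is called, which is why removejscssfile() walks its list in reverse.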

So what actually happens when you remove an external JavaScript or CSS file? Perhaps not entirely what you would expect. In the case of JavaScript, while the element is removed from the document tree, any code loaded as part of the external JavaScript file remains in the browser's memory. That is to say, you can still access variables, functions, etc. that were added when the external file first loaded (at least in IE7 and Firefox 2.x). If you're looking to reclaim browser memory by removing an external JavaScript file, don't rely on this operation to do all your work. With external CSS files, when you remove a file, the document does reflow to take into account the removed CSS rules, though unfortunately not in IE7 (Firefox 2.x and Opera 9 do).
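The "loaded code outlives its SCRIPT element" behaviour can be modelled in plain JavaScript (an analogy only; the object names here are invented for the sketch, and no DOM is involved):

```javascript
// Loading an external script means executing it, which registers
// functions in a shared namespace (here, a plain object standing in
// for the page's global scope).
var pageFunctions = {};

var scriptElement = {
  run: function () {
    pageFunctions.greet = function () { return "hi"; };
  }
};

scriptElement.run();  // the external file loads and executes
scriptElement = null; // "removeChild()": the element reference is gone...

console.log(pageFunctions.greet()); // ...but the registered code is still callable
```

Dropping the element only removes the node from the tree; the definitions it created live on until the page itself is torn down.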

Dynamically replacing an external JavaScript or CSS file


Replacing an external JavaScript or CSS file isn't much different from removing one as far as the process goes. Instead of calling parentNode.removeChild(), you'll be using parentNode.replaceChild() to do the bidding instead:

function createjscssfile(filename, filetype){
if (filetype=="js"){ //if filename is an external JavaScript file
var fileref=document.createElement('script')
fileref.setAttribute("type","text/javascript")
fileref.setAttribute("src", filename)
}
else if (filetype=="css"){ //if filename is an external CSS file
var fileref=document.createElement("link")
fileref.setAttribute("rel", "stylesheet")
fileref.setAttribute("type", "text/css")
fileref.setAttribute("href", filename)
}
return fileref
}

function replacejscssfile(oldfilename, newfilename, filetype){
var targetelement=(filetype=="js")? "script" : (filetype=="css")? "link" : "none" //determine element type to create nodelist using
var targetattr=(filetype=="js")? "src" : (filetype=="css")? "href" : "none" //determine corresponding attribute to test for
var allsuspects=document.getElementsByTagName(targetelement)
for (var i=allsuspects.length; i>=0; i--){ //search backwards within nodelist for matching elements to remove
if (allsuspects[i] && allsuspects[i].getAttribute(targetattr)!=null && allsuspects[i].getAttribute(targetattr).indexOf(oldfilename)!=-1){
var newelement=createjscssfile(newfilename, filetype)
allsuspects[i].parentNode.replaceChild(newelement, allsuspects[i])
}
}
}

replacejscssfile("oldscript.js", "newscript.js", "js") //Replace all occurrences of "oldscript.js" with "newscript.js"
replacejscssfile("oldstyle.css", "newstyle.css", "css") //Replace all occurrences of "oldstyle.css" with "newstyle.css"


Notice the helper function createjscssfile(), which is essentially a duplicate of loadjscssfile() as seen on the previous page, but modified to return the newly created element instead of adding it to the page. It comes in handy when parentNode.replaceChild() is called in replacejscssfile() to replace the old element with the new one. Some good news here: when you replace one external CSS file with another, all browsers, including IE7, reflow the document automatically to take the new file's CSS rules into account.

Conclusion


So when is all this useful? Well, in today's world of Ajax and ever larger web applications, being able to load accompanying JavaScript/CSS files asynchronously and on demand is not only handy but in some cases necessary. Have fun finding out what those cases are, or implementing the technique. :)

Article Source

Read More......

Dynamically loading an external JavaScript or CSS file

The conventional way to load external JavaScript (ie: .js) and CSS (ie: .css) files on a page is to stick a reference to them in the HEAD section of your page, for example:

<head>
<script type="text/javascript" src="myscript.js"></script>
<link rel="stylesheet" type="text/css" href="main.css" />
</head>


Files that are called this way are added to the page as they are encountered in the page's source, that is, synchronously. For the most part this setup meets our needs just fine, though in the world of Ajax design patterns, the ability to also fire up JavaScript/CSS on demand is becoming more and more handy. In this tutorial, let's see how it's done.

Dynamically loading external JavaScript and CSS files

To load a .js or .css file dynamically means, in a nutshell, using DOM methods to first create a swanky new "SCRIPT" or "LINK" element, assign it the appropriate attributes, and finally use element.appendChild() to add the element to the desired location within the document tree. It sounds a lot more fancy than it really is. Let's see how it all comes together:

function loadjscssfile(filename, filetype){
if (filetype=="js"){ //if filename is an external JavaScript file
var fileref=document.createElement('script')
fileref.setAttribute("type","text/javascript")
fileref.setAttribute("src", filename)
}
else if (filetype=="css"){ //if filename is an external CSS file
var fileref=document.createElement("link")
fileref.setAttribute("rel", "stylesheet")
fileref.setAttribute("type", "text/css")
fileref.setAttribute("href", filename)
}
if (typeof fileref!="undefined")
document.getElementsByTagName("head")[0].appendChild(fileref)
}

loadjscssfile("myscript.js", "js") //dynamically load and add this .js file
loadjscssfile("javascript.php", "js") //dynamically load "javascript.php" as a JavaScript file
loadjscssfile("mystyle.css", "css") //dynamically load and add this .css file


Since external JavaScript and CSS files can technically end with any custom file extension (ie: "javascript.php"), the function parameter "filetype" lets you tell the script what file type to expect before loading. After that, the function sets out to create the element using the appropriate DOM methods, assign it the proper attributes, and finally add it to the end of the HEAD section. Now, where the created element gets added is worth elaborating on:

document.getElementsByTagName("head")[0].appendChild(fileref)


By referencing the HEAD element of the page first and then calling appendChild(), the newly created element is added to the very end of the HEAD tag. Furthermore, you should be aware that no existing element is harmed when adding the new element. That is to say, if you call loadjscssfile("myscript.js", "js") twice, you end up with two new "SCRIPT" elements both pointing to the same JavaScript file. This is problematic only from an efficiency standpoint, as you'll be adding redundant elements to the page and using unnecessary browser memory in the process. A simple way to prevent the same file from being added more than once is to keep track of the files added by loadjscssfile(), and only load a file if it's new:

var filesadded="" //list of files already added

function checkloadjscssfile(filename, filetype){
if (filesadded.indexOf("["+filename+"]")==-1){
loadjscssfile(filename, filetype)
filesadded+="["+filename+"]" //List of files added in the form "[filename1],[filename2],etc"
}
else
alert("file already added!")
}

checkloadjscssfile("myscript.js", "js") //success
checkloadjscssfile("myscript.js", "js") //redundant file, so file not added


Here I'm just crudely checking whether a file that's about to be added already exists in the list of added file names stored in the variable filesadded, before deciding whether to proceed.
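An alternative to the string bookkeeping is an object used as a set, which sidesteps string concatenation and substring matching entirely (a sketch; registerFile() is a hypothetical helper, and the actual loadjscssfile() call would go where the comment indicates):

```javascript
// Record each added file name as an object key instead of
// concatenating "[name]" markers into one long string.
var addedFiles = {};

// Returns true the first time a file name is seen, false afterwards.
function registerFile(filename) {
  if (addedFiles[filename]) return false; // duplicate: skip loading
  addedFiles[filename] = true;            // first sighting: record it
  return true;                            // caller should loadjscssfile() now
}

console.log(registerFile("myscript.js")); // true  - load the file
console.log(registerFile("myscript.js")); // false - already added
```

Inside checkloadjscssfile(), you would call loadjscssfile(filename, filetype) only when registerFile(filename) returns true.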

OK, moving on: sometimes the situation may require that you actually remove or replace an added .js or .css file. Let's see how that's done next.

Article Source

Read More......

Tuesday, October 28, 2008

Introduction to HTML

Are you new to HTML? OK, we will try to learn this language of the internet here. HTML stands for Hyper Text Markup Language. You may know other computer languages like C, C++, Basic or FoxPro, and HTML is one more language in that line. But there is a difference: unlike those languages, HTML is not a scripting language or compiled code for the computer to execute. HTML mostly consists of tags, which we use in our text document for the web browser to understand. Let us discuss one simple example. We want some part of the following line to be written in bold letters and some part in italic letters.

HTML is the language of the internet.

In your word processor you can easily do this, but note that there has to be a standard way of doing it that can be understood by browsers running on different platforms. One universal way of formatting the text is required so that browsers across platforms can interpret it. So in HTML we write the above line this way.

HTML is the <b>language</b> of the <i>internet</i>

Once this text is opened by the user's browser, the browser understands the format and displays the text taking care of the tags used. So tags play an important role in formatting the text of an HTML document. The browser interprets the tags and displays the text in the required format.

View Source of the html page

As browsers display the text, they give the user an option to view the source of the page, i.e. the text with the tags. This feature is available via View > Source in the top menu of Internet Explorer and View > Page Source in Firefox. This way we can see the HTML-formatted text of any site.

Let us try for your first html page now.

Open your note pad or any other text editor. Copy and paste the following code inside it.

<html>
<head>
<title>(Type a title for your page here)</title>
<META NAME="DESCRIPTION" CONTENT=" ">
<META NAME="KEYWORDS" CONTENT=" ">
</head>

<body >

Hello <br>
Welcome to plus2net.com

</body>

</html>

Save this as test.htm. In Windows Notepad, take care to type the file name within quotes, like this: "test.htm" (otherwise Notepad appends a .txt extension). Open this file in your browser (or just double-click the file in your file explorer).

You will see a message like this

Hello
Welcome to plus2net.com

Note the line break between Hello and Welcome to plus2net.com. We have used one line break tag <br> and the browser has placed a line break on reading it. Now from the browser menu visit View > Source. You can see your original source code there.

Try to develop more such pages by using different tags.

Source

Read More......

How to open pages in new window

We move from one page to another page of a web site using hyperlinks, or simply links. Clicking these links opens the new page in the same window. Links can have an absolute URL or a relative URL as the address of the page we want to move to. For full details on the different types of links, visit the hyperlink page.

While designing hyperlinks we can create links that open the new page in a new window. This way we can keep the existing window open without disturbing the current page. On your website you may not want your visitors to click an external link and leave your site, so you can modify the link and tell the browser to open the external site in a new window. Here is a simple link (the URL is a placeholder):

<a href="http://www.example.com">new site</a>

To the above link we will add a target attribute, either a window name such as target="new" or the special value target="_blank". Like this:

<a href="http://www.example.com" target="new">new site</a>

Or

<a href="http://www.example.com" target="_blank">new site</a>


Read More......

Microformats vs. RDF: How Microformats Relate to the Semantic Web

Microformats are a wildly popular set of formats for embedding metadata within normal XHTML. The primary advantage Microformats offer over RDF (including its embedded serializations) is that you can embed metadata directly in the XHTML, reducing the amount of markup you need to write (e.g. you don't have to write XHTML plus additional RDF). Many people have contended that Microformats are a possible replacement for RDF; however, Microformats were not designed to cover the same scope as RDF. While both Microformats and RDF make it possible to store data about data, they simply do not work to solve the same set of problems.

A quick comparison

I don't blame the Microformats people for this confusion over what Microformats are or are not. Rather, I blame the sensationalists and know-nothings that tend to jump on any new standard, format, or design pattern. The Microformats about page tells you directly what Microformats are and are not.

What Microformats were not intended to be:

* A new language
* Infinitely extensible and open-ended
* An attempt to get everyone to change their behavior and rewrite their tools
* A whole new approach that throws away what already works today
* A panacea for all taxonomies, ontologies, and other such abstractions
* Defining the whole world, or even just boiling the ocean
* Any of the above

There you have it, clearly stated and all. I would guess that most of the arguments made by pro-RDF people are extinguished after reading that unordered list. However some people still believe that we can create the Semantic Web with Microformats.

What RDF allows (and Microformats lacks):

* Resources are represented as URIs, allowing you to access metadata remotely
* Infinitely extensible and open-ended design
* A powerful Ontology language (OWL) that is built upon it
* The ability to utilize, share, and extend any number of vocabularies
* No reliance on pre-defined "formats" (i.e. not limited by the types of data that can be encoded)

As you can see there are a few things we can do with RDF that cannot be done with Microformats. The Semantic Web relies on the things I've listed above. These are the clear-cut reasons why Microformats will not be part of the W3C's Semantic Web vision.
Persisting the data within Microformats

Another issue I've thought about is how we are to persist the data we glean from Microformats. How do you usefully store Microformat metadata (beyond leaving it in its XHTML form)? The information stored in Microformats eventually comes out in triple form, one way or the other. Take a look at this example of an hCard-style telephone property:

<span class="tel">
<span class="type">home</span>:
<span class="value">+1.415.555.1212</span>
</span>

What information can be gleaned from this example? Well, the home telephone number (of an unknown person or entity, in this example) is +1.415.555.1212. In the end we are still getting the subject-predicate-object form. In this case the subject would be the owner of that number, the predicate would be "home," and the object is the telephone number itself.

So really, we will likely require triple storage for either RDF or Microformats. In all honesty, I don't know of any Microformat-stores. If you know of some, I would like to know if they are any different from a normal triple-store.
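Whatever the store, each gleaned statement reduces to a record with three fields, which is exactly what a triple store indexes (a minimal sketch; the subject name is a placeholder, since the example markup never identifies the owner of the number):

```javascript
// One gleaned statement in subject-predicate-object form.
function makeTriple(subject, predicate, object) {
  return { subject: subject, predicate: predicate, object: object };
}

// The telephone example yields: <unknown contact> --home--> +1.415.555.1212
var triple = makeTriple("_:unknownContact", "home", "+1.415.555.1212");
console.log(triple.predicate + ": " + triple.object); // home: +1.415.555.1212
```

A triple store, whether fed by RDF or by parsed Microformats, is essentially a large indexed collection of records shaped like this one.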

Microformats have a place and a purpose

At this point I'd like to say that Microformats do have a number of qualities that RDF (although not necessarily all serializations) does not accommodate for, at least not in the same way:

* Designed for humans first, machines second
* Modularity / embeddability
* Enables and encourages decentralized development, content, services
* A set of design principles for formats
* Adapted to current behaviors and usage patterns
* Highly correlated with semantic XHTML

I've stated before that I believe Microformats will help bring about the Semantic Web by introducing "metadata sprinkling" (the act of including metadata in otherwise "normal" data) to more people. They allow for simple metadata embeddability and do not affect how an XHTML document validates. This is the kind of approach that will help normal users come closer to understanding the Semantic Web vision.
Conclusions

To me, Microformats are to RDF as HTML 5 is to XHTML: on the surface they both appear to be solutions to the same problem, but the former misses the point of why the latter was created. On the very same about page I cited earlier there is a bullet point suggesting that Microformats will be part of the semantic web (note the lowercase letters, implying a semantic web, not the one envisioned by the W3C). I find that all competing Semantic Web development paths fall short of creating an entirely linked Semantic Web: the kind that gives us a platform to stand on above the Web document layer. Microformats have their place, just not as a replacement for RDF.


Brochure Design: Tips and Techniques

A brochure is an advertising tool that uses eye-catching design and persuasive language to get its message across. Brochures are designed to promote locations, events, hotels, products, and services. They are usually distributed at trade shows and through direct mail, and can be used to promote a new product.

Brochure design is considered a tough task for a designer. Because of its importance, a brochure has to be designed very carefully. The best brochure design is one that communicates its ideas clearly and persuades people to use the product or service the brochure was created for. So, if you are planning to make a customized brochure design, make sure it reflects your thinking clearly and speaks directly to your customers. You can design your brochure yourself or hire a professional brochure designer to create a really effective brochure for your purpose.

Once you decide to make a brochure, you need to select a good brochure design sample so it stands out from the rest. You may choose something classy and distinctive to attract customers like never before. Several things need to be kept in mind before designing a brochure. The first is its structure: what size the brochure should be, and its color combination; attractive colors are usually used to capture people's interest. The second is the choice of words, which is really important. Words should be easy to understand, convey a clear meaning, and represent your ideology. The third is the graphic design used in the brochure. Brochure graphics should be done carefully and should look professional while conveying the brochure's message.

Normally, business people need corporate brochure design to promote their products and services. It is a unique way of attracting new customers and retaining old ones to gain stability and raise profit. Brochures are campaign and marketing print materials that help you expand your business beyond its current boundaries. With brochures for its products and services, a company can invite new customers, maintain current ones, and earn more profit. Brochures serve as evidence of a company's genuine offerings and the trusted service it promises. So, if you haven't designed a good brochure yet, go and create a customized brochure design that says more about your style and, above all, your thinking.

Article Source


Web Design and the DMCA: Giving and Getting Take Down Notices

Does your client ask you to stand behind the content you create for their site? Most clients worth their salt will, and successful freelance and web design firms know enough about DMCA take-down notices to do so comfortably. Do you?
What is the DMCA?


The Digital Millennium Copyright Act (the DMCA) is a federal statute that may stop a copyright infringement claim in the United States in its tracks if what you do fits within its definition of a “service provider.” I say “may” because the DMCA statute is very specific, and there are quite a few hoops to jump through. Simply put, the DMCA may help reduce the risk of a lawsuit for copyright infringement, but it will not stop a lawsuit from being filed in every case. What you may need to realize, however, is that the DMCA’s protections do not apply to everyone, and it’s better to find this out sooner than later.

(NB: The contents of this article are solely concerned with US copyright law. If a non-US entity writes to you about content posted outside of the US, your liability arises from laws outside of the US, and the DMCA will provide absolutely no insulation unless a claim arises under the US Copyright Act.)

The DMCA provides a mechanism for an owner of copyrightable material to send a demand—a take-down notice—to a service provider, demanding removal of copyrightable material that is uploaded or displayed without authorization. If the service provider accused of hosting or displaying the material follows the take-down requirements, it can obtain insulation from a claim for money (called a claim for monetary damages), or a claim for injunctive relief.

Does it affect me?

If you are a service provider, including a web host, content publisher, or transmitter of content, the protections of the DMCA may apply to you. The DMCA protections do not apply if you do not fall within the service provider definition.

If you are a creator of online content—or any copyrightable material—you may be required to respond to a DMCA take-down notice.

What type of content does the DMCA regulate?

The DMCA uses the term material to mean any copyrightable work, including written text (also referred to as literary works), visual works, graphic works, or musical works protectable under the Copyright Act.

Does the DMCA cover design as well as content? The DMCA covers copyrightable works, period.

Is software code material?

Yes, if it is displayed on an online service. Even code that passes between two computers is a literary work and protectable under the Copyright Act. Thus, code is material under the DMCA.

Sending or receiving a take-down notice

OK, so you’ve got online content. Or you’ve seen your content online on a site that you did not authorize or license. If you find yourself in the position of sending or receiving a take-down notice, this informal checklist will help you get it right the first time:

Follow the rules

If you get a take-down notice from your web host, client, or content publisher, you should understand the implications of taking down the content. You should also know that the DMCA (17 USC 512) does not provide liability insulation to the mere content creator or publisher. The insulation is for your web host, and the reason you should respond to their take-down request is 1) so your web host or client doesn’t terminate your contract; and 2) to avoid any further claims of damages by the party that alleges infringement.

If you put the content up—on your own web site, for example—the DMCA is not going to work for you if you receive a take-down notice, but it will dictate how you respond. So take note.

If you wonder whether you might qualify under the first prong of the DMCA for liability insulation, is the transmission of the material initiated by or at the direction of a person other than you? If the answer is yes, then you might fit into the definition of a protected service provider. (Please call your lawyer to find out why I say you might fit into the definition and have some liability insulation. It is not a complete immunity.) If you qualify and follow the rules, you—a service provider—are not liable for money damages or injunctive relief for copyright infringement.

Get your notice done right

So you don’t fit the service provider definition, but your client has sent a take-down notice to you. Now what? Is the take-down notice complete and effective? If it is not, that does not relieve you of any liability, but it may slow down the complaining party, and may also delay your web host from turning you off or taking down your content.

To send an effective notice, the injured party must put it in writing. An irate phone call won’t work. The notice must meet the following requirements:

* Be signed by an authorized person (either the owner of an exclusive right that is allegedly infringed, or their agent).
* Identify what was infringed. Specifically, it must list or describe the copyrighted work claiming to have been infringed, or, if multiple copyrighted works at a single online site are covered by a single notification, a representative list of such works at that site.
* Identify the material that is infringing—with “reasonably sufficient” detail—to permit the service provider to locate the material.
* Include the complaining party’s complete contact information.
* Include the following statements:
o “The complaining party has a good faith belief that use of the material in the manner complained of is not authorized by the copyright owner, its agent, or the law;” and
o “The notification is accurate, and under penalty of perjury, that the complaining party is authorized to act on behalf of the owner of an exclusive right that is allegedly infringed.”

So, if the notice does not meet these requirements, you will be in a position to write to your web host or client—or the party that sent you the notice—to request a corrected notice. You should still begin to think about how you will defend against an alleged claim for infringement because remember, the DMCA does not insulate non-service providers.

Agent designated?


If you’re a service provider seeking insulation under DMCA you must designate an agent with the US Copyright Office. If you’re trying to track down the copyright agent, look at the Copyright Office’s DMCA agent listing. If your alleged infringer does not have an agent, this could be a mark against them.

Take reasonable steps to contact a complainer whose notice is inadequate. If a notice substantially complies with the requirements, contact the complainer promptly to maintain your insulation.

Take down the content in accordance with 512(g). (Remember that this article is general info only and may not apply to your situation. Nothing substitutes for talking to a lawyer about your factually specific situation). If you are a web designer, you should carefully:

* consider taking down your content or disabling access to it;
* prepare a counter-notice (see below) within ten days of receipt of the original take-down notice to refute its allegations and demand that your web host replace the content;
* monitor the original posting to see whether your web host did in fact replace the removed material and cease disabling access in not less than ten and not more than fourteen business days after receipt of the counter-notice, unless the content creator has received notice from the complainant that an action has been filed.

What counter-notice is needed?

Your web host sends you a take-down notice they received from a third party. Now what? Without delay, send a counter-notice to the web host’s designated agent that:

* has your signature;
* states that the content in question has been removed and disabled, along with the location where it was;
* states, “the subscriber has a good faith belief that the material was removed or disabled as a result of mistake or misidentification of the material to be removed or disabled”;
* includes your complete contact info and a statement that “the subscriber consents to the jurisdiction of Federal District Court for the judicial district in which the address is located, or if the subscriber’s address is outside of the United States, for any judicial district in which the service provider may be found, and that the subscriber will accept service of process from the person who provided notification under subsection (c)(1)(C) or an agent of such person.”

If your content is taken down and you receive a take-down order, your recourse against your web host is limited.

Conclusion

If you provide content for others and are not a mere passive conduit, a web host or publisher, the DMCA may not provide protection for you from a lawsuit but it does provide a mechanism you must follow. Pay close attention to where your content is posted, hosted, and published. Look for the copyright agent registered with the Copyright Office. Follow the take-down notice specifics precisely, and do not let up until you get the proper response from your take-down notice.

The DMCA may be a tool to protect the web host and content publisher, but its effect may be to put a huge burden onto the shoulders of the content owner. If you are the content owner, knowledge of the DMCA will be an important tool in your arsenal.

Article Source


CSS Controlled Web Design - Tables are for sitting at...

By now, most web designers are aware of the many benefits of using CSS (Cascading Style Sheets) to control the formatting and appearance of text elements within their web pages.

Indeed, if applied as outlined in one of my articles from 2006 (CSS - Weight-Loss for your Code), Cascading Style Sheets can substantially cut down the amount of code needed to present a web page in a polished and professional manner.


What few designers realise however, is that CSS is capable of so much more than just handling a page's text formatting.

If used to its fullest capability, the Style Sheet is capable of controlling just about every aspect of page layout and presentation, even to the extent of replacing a Hyper-Text document's traditional table-based design structure.

Quite aside from saving the web developer a substantial amount of coding time, this approach also cuts the amount of code needed to display a web page properly to an absolute minimum. So much so that in the recent redesign of one of our web sites, the use of CSS-controlled HTML cut the average document size from 24kb to less than 5kb.

The key to designing CSS controlled web pages rests in the use of DIV Tags and DIV IDs.

For example, a traditional table structure would look something like this:


<table width="800" align="center" cellpadding="0" cellspacing="0">
  <tr>
    <td width="560" align="left" class="one"><h1>Example Text</h1></td>
    <td width="240" align="left" class="two"><img src="images/exampleimage.jpg" width="200" height="100" alt="Example Image" /></td>
  </tr>
</table>

With CSS control, exactly the same look and feel can be achieved with the following two DIV Tags:

<div id="content"><h1>Example Text</h1></div>
<div id="image"><img src="images/exampleimage.jpg" width="200" height="100" alt="Example Image" /></div>

The DIV ID passes control of layout and appearance to the CSS, which handles it as follows:

#content {
position:absolute;
width: 560px;
height: 100px;
top: 10px;
left: 100px;
font-family: Arial, Helvetica, sans-serif;
font-size: 12px;
font-weight: normal;
color: #000000;
background-color: #FFFFFF;
}

#image {
position:absolute;
width: 240px;
height: 100px;
top: 10px;
left: 660px;
font-family: Arial, Helvetica, sans-serif;
font-size: 12px;
font-weight: normal;
color: #000000;
background-color: #FFFFFF;
}

On the face of it, it may seem like this entails some extra work on the designer's part, but don't forget that at the same time as controlling the DIV Tag's position and appearance, the CSS also handles all text formatting, and that the above Style Sheet will only need to be written once in order to control an entire web site.
Then of course there is the fact that the above example is an immensely simple one. Imagine for a moment, the sheer amount of code which is saved by using CSS over the course of writing an in-depth web page.

The end result is an HTML document which has been stripped of all unnecessary code and is consequently extremely 'light-weight' and easily indexed by search engines.

Additionally, it is also possible to radically alter a page's appearance at the click of a button without ever changing any of its HTML code. This approach is very capably demonstrated at the CSS Zen Garden, where more information about the power of CSS controlled web design can be found.
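
As a crude sketch of the idea, the same document can be restyled at the click of a button simply by pointing it at a different stylesheet (the file names and element id here are invented for illustration):

```html
<link id="theme" rel="stylesheet" type="text/css" href="default.css" />
<!-- Swapping the stylesheet reference changes the entire presentation,
     without touching a single line of the page's HTML -->
<button onclick="document.getElementById('theme').href = 'alternate.css';">
  Switch style
</button>
```
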

Furthermore, like HTML, CSS is undergoing constant revisions and will doubtlessly grow to play an even more important part in web design during years to come. Therefore, now may be a good time to further acquaint yourself with the full functionality of this essential web design element.

Article Source


Saturday, September 20, 2008

How Perfect PHP Pagination Works

Pagination is a topic that has been done to death -- dozens of articles and reference classes can be found for the management of result sets ... however (and you knew there was a "however" coming, didn't you?), I've always been disgruntled with the offerings to date. In this article I offer an improved solution.

Some pagination classes require parameters, such as a database resource and an SQL string or two, to be passed to the constructor. Classes that utilize this approach are lacking in flexibility - what if you require a different formatting of page numbers at the top and bottom of your pages, for example? Do you then have to modify some output function, or subclass the entire class, just to override that one method? These potential "solutions" are restrictive and don't encourage code reuse.

This tutorial is an attempt to further abstract a class for managing result pagination, thereby removing its dependencies on database connections and SQL queries. The approach I'll discuss provides a measure of flexibility, allowing the developer to create his or her very own page layouts, and simply register them with the class through the use of an object oriented design pattern known as the Strategy Design Pattern.
What Is the Strategy Design Pattern?

Consider the following: you have on your site a handful of web pages for which the results of a query are paged. Your site uses a function or class that handles the retrieval of your results and the publishing of your paged links.

This is all well and good until you decide to change the layout of the paged links on one (or all) of the pages. In doing so, you're most likely going to have to modify the method to which this responsibility was delegated.

A better solution would be to create as many layouts as you like, and dynamically choose the one you desire at runtime. The Strategy Design Pattern allows you to do this. In a nutshell, the Strategy Design Pattern is an object oriented design pattern used by a class that wants to swap behavior at run time.

Using the polymorphic capabilities of PHP, a container class (such as the Paginated class that we'll build in this article) uses an object that implements an interface, and defines concrete implementations for the methods defined in that interface.

While an interface cannot be instantiated, it can reference implementing classes. So when we create a new layout, we can let the strategy or interface within the container (the Paginated class) reference the layouts dynamically at runtime. Calls that produce the paged links will therefore produce a page that's rendered with the currently referenced layout.
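
In code, the arrangement might be sketched as follows. The Paginated container is named in the article; the layout interface, the PlainLayout strategy, and the method names are my own invention for illustration:

```php
<?php
// The strategy interface: every layout knows how to render paged links.
interface PageLayout
{
    public function renderLinks($currentPage, $totalPages);
}

// One concrete strategy: a plain "1 [2] 3" style of links.
class PlainLayout implements PageLayout
{
    public function renderLinks($currentPage, $totalPages)
    {
        $links = array();
        for ($i = 1; $i <= $totalPages; $i++) {
            $links[] = ($i === $currentPage) ? "[$i]" : "$i";
        }
        return implode(' ', $links);
    }
}

// The container holds a reference to whichever strategy was registered,
// and delegates rendering to it at run time.
class Paginated
{
    private $layout;

    public function setLayout(PageLayout $layout)
    {
        $this->layout = $layout;
    }

    public function fetchPagedLinks($currentPage, $totalPages)
    {
        return $this->layout->renderLinks($currentPage, $totalPages);
    }
}

$paginated = new Paginated();
$paginated->setLayout(new PlainLayout());
echo $paginated->fetchPagedLinks(2, 4); // 1 [2] 3 4
```

Registering a different PageLayout implementation changes the markup of the links without modifying Paginated itself.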

http://www.sitepoint.com/article/perfect-php-pagination/


What's new in PHP 5.3 ?

PHP 6 is just around the corner, but for developers who just can't wait, there's good news -- many of the features originally planned for PHP 6 have been back-ported to PHP 5.3, a final stable release of which is due in the first half of this year.

This news might also be welcomed by those who wish to use some of the new features but whose hosting providers will not be upgrading to version 6 for some time -- hosting providers have traditionally delayed major version updates while acceptance testing is performed (read: until the stability has been proven elsewhere first). Many hosting companies will probably delay upgrading their service offerings until version 6.1 is released. A minor upgrade from 5.2.x to 5.3, however, will be less of a hurdle for most hosting companies.

This article introduces the new features, gives examples of where they might be useful, and provides demo code to get you up and running with the minimum of fuss. It doesn't cover topics such as installing PHP 5.3 -- the latest development release of which is currently available. If you'd like to play along with the code in this article, you should install PHP 5.3, then download the code archive. An article on installing PHP 5.3 can be found on the Melbourne PHP Users Group web site.
Namespaces

Before the days of object oriented PHP, many application developers made use of verbose function names in order to avoid namespace clashes. WordPress, for example, implements functions such as wp_update_post and wp_create_user. The wp_ prefix denotes that the function pertains to the WordPress application, and reduces the chance of it clashing with any existing functions.

In an object oriented world, namespace clashes are less likely. Consider the following example code snippet, which is based on a fictional blogging application:
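
The snippet itself is on the linked page; as a minimal sketch of the idea (the Blog namespace and Post class below are invented for illustration), a namespace lets short class names coexist safely:

```php
<?php
namespace Blog;

// Inside the Blog namespace, a short name like Post cannot clash
// with a Post class declared in some other namespace.
class Post
{
    public $title;

    public function __construct($title)
    {
        $this->title = $title;
    }
}

// From anywhere, the class can be addressed by its fully qualified name:
$post = new \Blog\Post('Hello, namespaces');
echo $post->title; // Hello, namespaces
```
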

Read More Here


Google Chrome's Multi-process Architecture

Unlike most current web browsers, Google Chrome uses many operating system processes to keep web sites separate from each other and from the rest of your computer. In this blog post, I'll explain why using a multi-process architecture can be a big win for browsers on today's web. I'll also talk about which parts of the browser belong in each process and in which situations Google Chrome creates new processes.

1. Why use multiple processes in a browser?

In the days when most current browsers were designed, web pages were simple and had little or no active code in them. It made sense for the browser to render all the pages you visited in the same process, to keep resource usage low.

Today, however, we've seen a major shift towards active web content, ranging from pages with lots of JavaScript and Flash to full-blown "web apps" like Gmail. Large parts of these apps run inside the browser, just like normal applications run on an operating system. Just like an operating system, the browser must keep these apps separate from each other.

On top of this, the parts of the browser that render HTML, JavaScript, and CSS have become extraordinarily complex over time. These rendering engines frequently have bugs as they continue to evolve, and some of these bugs may cause the rendering engine to occasionally crash. Also, rendering engines routinely face untrusted and even malicious code from the web, which may try to exploit these bugs to install malware on your computer.

In this world, browsers that put everything in one process face real challenges for robustness, responsiveness, and security. If one web app causes a crash in the rendering engine, it will take the rest of the browser with it, including any other web apps that are open. Web apps often have to compete with each other for CPU time on a single thread, sometimes causing the entire browser to become unresponsive. Security is also a concern, because a web page that exploits a vulnerability in the rendering engine can often take over your entire computer.

It doesn't have to be this way, though. Web apps are designed to be run independently of each other in your browser, and they could be run in parallel. They don't need much access to your disk or devices, either. The security policy used throughout the web ensures this, so that you can visit most web pages without worrying about your data or your computer's safety. This means that it's possible to more completely isolate web apps from each other in the browser without breaking them. The same is true of browser plug-ins like Flash, which are loosely coupled with the browser and can be separated from it without much trouble.

Google Chrome takes advantage of these properties and puts web apps and plug-ins in separate processes from the browser itself. This means that a rendering engine crash in one web app won't affect the browser or other web apps. It means the OS can run web apps in parallel to increase their responsiveness, and it means the browser itself won't lock up if a particular web app or plug-in stops responding. It also means we can run the rendering engine processes in a restrictive sandbox that helps limit the damage if an exploit does occur.

Interestingly, using multiple processes means Google Chrome can have its own Task Manager (shown below), which you can get to by right clicking on the browser's title bar. This Task Manager lets you track resource usage for each web app and plug-in, rather than for the entire browser. It also lets you kill any web apps or plug-ins that have stopped responding, without having to restart the entire browser.

Read More Here
