Find out what the 12 most troublesome facts of development are, and learn how they can affect efficiency.
Fact 1: Maintenance cost is the biggest enemy of efficiency.
Fact 2: Developers don't like to test, and they don't test their code the way they should.
Fact 3: When a bug is discovered, it takes much more time to isolate it than to correct it.
Fact 4: It takes time for a developer to switch from one project to another.
Fact 5: Bugs increase as the number of developers increases.
Fact 6: Tests have several layers.
Writing good tests is a tough challenge. For example, writing unit tests on existing code is very difficult, and they cannot test everything. Functional (QA) tests are powerful for challenging your application, but they usually run slowly and fail to give enough information about the cause of the problem.
So you must divide and conquer. Running huge tests is useless if the fast ones do not pass. Create test layers from the simplest to the most complex:
* Level 0: Unit tests on the developer's own environment
* Level 1: Unit tests on a build server, run on (almost) every commit
* Level 2.1: Integration tests, where several of your modules are connected together (you replace the mock objects with the actual modules you wrote; for instance, you use a real XML parser instead of a mock one)
* Level 2.2: Integration tests with access to the system (socket, disk, etc.), where the actual system access is exercised
* Level 3: Functional tests, where you test the visible part of your application (actual functionality as seen by the user); this layer alone is certainly not enough
* Level 4: Performance tests
* Level 5: Customer use (OK, this is not a test layer, but it certainly is a source of surprising new bugs.)
You need to start the more complex tests only if the simple ones pass.
The motto of this methodology is that a bug should never appear twice. Therefore, for every bug you find, create a test that guarantees it will never happen again. Sometimes that test is a unit test (which is better), sometimes it is an integration test.
The sooner you catch an error, the sooner you can correct it. So you'd better catch it at Level 0 than at Level 3, and code quality tends to increase naturally with the number of tests.
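As a rough sketch of that motto (this example is not from the original article; the join helper and the bug it pins are invented for illustration), a Level 0 regression test in Java with JUnit could look like this:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical Level 0 regression test: suppose a bug once made join()
// drop the separator between elements; this test pins the fix so the
// bug can never silently reappear.
public class JoinerTest {

    // The code under test (in a real project it would live in its own class).
    static String join(String[] parts, String sep) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < parts.length; i++) {
            if (i > 0) sb.append(sep);
            sb.append(parts[i].trim());
        }
        return sb.toString();
    }

    @Test
    public void separatorIsKeptBetweenElements() {
        assertEquals("a,b,c", join(new String[] {"a", " b", "c "}, ","));
    }
}

Once such a test exists at Level 0, the bug is caught in seconds on the developer's machine instead of hours later in a functional test.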
Fact 7: You must have a restricted number of languages.
Fact 8: Tracking change and innovation is necessary but difficult.
Fact 9: To stay productive, a developer must not switch activities too often.
Fact 10: Defining the architecture is as important as coding.
Fact 11: Developers have different priorities than product owners do.
Fact 12: Coding conventions are efficient; they must be imposed.
Tuesday, January 30, 2007
10 things you should know before submitting your site to Google
Just as you clean up your house before guests arrive, you should get your website ready for Google's crawler, since this is one of the most important guests you will ever have. With that in mind, here are 10 things you should double-check before submitting your website to the index. If you want, you can view this article as a correction of the top 10 mistakes made by webmasters.
1. If you have a splash page on your website, make sure you have a text link that allows visitors to get past it.
I've seen many websites with a fancy Flash introduction on the index page and no other way to navigate past it. Google can't read inside your Flash page, and therefore it cannot get past it either. All you have to do is put a text link to the page behind the splash screen, and the deed is done.
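For instance (the file name main.html is only an example), a plain text link somewhere on the splash page is all the crawler needs:

<a href="main.html">Skip the intro and enter the site</a>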
2. Make sure you have no broken links
I know this is kind of obvious, but you'd be surprised how many errors the Google crawler runs into every day because of broken links. So check and double-check every internal link on your site before submission. Don't forget that your links are also your visitors' paths to your content. It's not all about Google, you know :)
3. Check the TITLE tags
Since Google lets you search within title tags, and since the title tag is displayed at the top of your browser window, this is an important aspect to check. That doesn't mean you should cram a list of 20+ keywords in there. Instead, make it a readable sentence, since it is seen by both crawlers and surfers.
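For example, a short, readable title (the site name and wording are invented):

<title>Smith's Bakery - fresh bread and pastries in Boston</title>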
4. Check the META tags
Rumors that Google ignores META tags are not 100% correct. Google relies on these tags to describe a site when there's a lot of navigation code that wouldn't make sense to a human searcher, so make sure everything is in order and set up a valid KEYWORDS tag and a valid DESCRIPTION tag. You never know.
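A minimal example of the two tags (the content is invented for illustration):

<meta name="description" content="Fresh bread, cakes and pastries baked daily in Boston.">
<meta name="keywords" content="bakery, bread, pastries, Boston">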
5. Check your ALT tags
ALT tags are probably the most neglected aspect of a website, since hardly anyone bothers to keep them in order. It's definitely a plus if you do, because it gives the Google spider a clue about your graphics. However, don't go to extremes and start explaining in an ALT tag that a list bullet is a list bullet.
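For example (the file name and text are made up):

<img src="storefront.jpg" alt="Photo of the bakery storefront">

A purely decorative image, on the other hand, can simply get an empty alt attribute (alt="").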
6. Check your frames
If you use frames on your website, you might not get indexed 100%. Google actually recommends reading Danny Sullivan's article on search engines and frames. Make sure either that Google can read your frames or that there is an alternative, defined via the NOFRAMES tag.
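A sketch of the NOFRAMES fallback (the frame sources are placeholders):

<frameset cols="200,*">
  <frame src="menu.html">
  <frame src="content.html">
  <noframes>
    <body>
      <p>This site uses frames. <a href="content.html">View the content without frames.</a></p>
    </body>
  </noframes>
</frameset>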
7. Do you have dynamically generated pages?
I know the web has evolved a lot lately, and more and more websites built on dynamic scripting languages (PHP, ASP, etc.) appear every second, but Google has said it limits the number of dynamic web pages it indexes. It's not too late to consider a compromise and include some static content in your pages. It helps.
8. Update your content regularly
This is an important aspect to consider, since Google indexes pages that are updated regularly more quickly. You will notice that the number of your pages indexed by the search engine increases day by day if you keep updating, but stagnates or decreases if you don't bring anything new. I suggest setting up a META option in the header that tells Google how frequently it should come back for reindexing.
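The tag usually suggested for this is revisit-after (assuming that is the META option meant here; note it is only a hint, and search engines are not guaranteed to honor it):

<meta name="revisit-after" content="7 days">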
9. The robots.txt
This file is a powerful resource if used properly. You can filter which bots crawl your website, and you can restrict access to certain URLs that should not be indexed (login pages, admin back ends, etc.).
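A small robots.txt along those lines (the paths and bot name are examples):

User-agent: BadBot
Disallow: /

User-agent: *
Disallow: /admin/
Disallow: /login/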
10. To cache or not to cache?
Google caches some web pages for quick access, but some webmasters don't like that. Opting out is quite simple: all you have to do is add one line of code between your HEAD tags.
<META NAME="ROBOTS" CONTENT="NOARCHIVE"> should be enough to stop all robots from caching and archiving the page where the code is embedded.
With all that said, you can now submit your website to Google's index.